
Future Interface Experiences

Part 2: 11 - 22 Pictures That Prove We’re Living In The Damn Future

11. This tree-removal device:

12. Sand being controlled by sound:


13. This table:

14. A portable, single-line printer:


15. This camera balancer:

16. This ruler that automatically measures angles, etc.:


17. Invisible glass putty:


18. This man’s juggling prowess:

19. This beer drone:


20. This fan that runs off the heat of your hand:

21. The reaction this liquid has to the man on the right’s chemically treated clothing:


22. Just keep this in mind. 1994 vs. 2014:

Part 1: 1 - 10 Pictures That Prove We’re Living In The Damn Future

1. Smart glass that obscures the bathroom when you lock it:


2. A wheelchair that can go up stairs:

3. This ice cream app:


4. This garbage can:


5. This clock that writes the time for you:

6. This progression:


7. This drone camera that follows you wherever you go:

8. This Harry Potter-esque ad:


9. An app that translates words in real time:

10. This zipper design that won’t let you down:


20 Crucial Terms Every 21st Century Futurist Should Know


We live in an era of accelerating change, when scientific and technological advancements are arriving rapidly. As a result, we are developing a new language to describe our civilization as it evolves. Here are 20 terms and concepts that you’ll need to navigate our future.

Back in 2007 I put together a list of terms every self-respecting futurist should be familiar with. But now, some seven years later, it’s time for an update. I reached out to several futurists, asking them which terms or phrases have emerged or gained relevance since that time. These forward-looking thinkers provided me with some fascinating and provocative suggestions — some familiar to me, others completely new, and some a refinement of earlier conceptions. Here are their submissions, including a few of my own.

1. Co-veillance


Futurist and scifi novelist David Brin suggested this one. It’s kind of a mash-up between Steve Mann’s sousveillance and Jamais Cascio’s Participatory Panopticon, and a furtherance of his own Transparent Society concept. Brin describes it as: “reciprocal vision and supervision, combining surveillance with aggressively effective sousveillance.” He says it’s “scrutiny from below.” As Brin told io9:

Folks are rightfully worried about surveillance powers that expand every day. Cameras grow quicker, better, smaller, more numerous and mobile at a rate much faster than Moore’s Law (i.e. Brin’s corollary). Liberals foresee Big Brother arising from an oligarchy and faceless corporations, while conservatives fret that Orwellian masters will take over from academia and faceless bureaucrats. Which fear has some validity? All of the above. While millions take Orwell’s warning seriously, the normal reflex is to whine: “Stop looking at us!” It cannot work. But what if, instead of whining, we all looked back? Countering surveillance with aggressively effective sousveillance — or scrutiny from below? Say by having citizen-access cameras in the camera control rooms, letting us watch the watchers?

Brin says that reciprocal vision and supervision will be hard to enact and establish, but that it has one advantage over “don’t look at us” laws, namely that it actually has a chance of working. (Image credit: 24Novembers/Shutterstock)

2. Multiplex Parenting

This particular meme — suggested to me by the Institute for the Future's Distinguished Fellow Jamais Cascio — has only recently hit the radar. “It’s in-vitro fertilization,” he says, “but with a germline-genetic mod twist.” Recently sanctioned by the UK, this is the biotechnological advance where a baby can have three genetic parents via sperm, egg, and (separately) mitochondria. It’s meant as a way to flush out debilitating genetic diseases. But it could also be used for the practice of human trait selection, or so-called “designer babies”. The procedure is currently being reviewed for use in the United States. The era of multiplex parents has all but arrived.


3. Technological Unemployment


Futurist and scifi novelist Ramez Naam says we should be aware of the potential for “technological unemployment.” He describes it as unemployment created by the deployment of technology that can replace human labor. As he told io9,

For example, the potential unemployment of taxi drivers, truck drivers, and so on created by self-driving cars. The phenomenon is an old one, dating back for centuries, and spurred the original Luddite movement, as Ned Ludd is said to have destroyed knitting frames for fear that they would replace human weavers. Technological unemployment in the past has been clearly outpaced (in the long term) by the creation of new wealth from automation and the opening of new job niches for humans, higher in levels of abstraction. The question in the modern age is whether the higher-than-ever speed of such displacement of humans can be matched by the pace of humans developing new skills, and/or by changes in social systems to spread the wealth created.

Indeed, the potential for robotics and AI to replace workers of all stripes is significant, leading to worries of massive rates of unemployment and subsequent social upheaval. These concerns have given rise to another must-know term that could serve as a potential antidote: guaranteed minimum income. (Image credit: Ociacia/Shutterstock)

4. Substrate-Autonomous Person


In the future, people won’t be confined to their meatspace bodies. This is what futurist and transhumanist Natasha Vita-More describes as the “Substrate-Autonomous Person.” Eventually, she says, people will be able to form identities in numerous substrates, such as using a “platform diverse body” (a future body that is wearable/usable in the physical/material world — but also exists in computational environments and virtual systems) to route their identity across the biosphere, cybersphere, and virtual environments.


"This person would form identities," she told me. "But they would consider their personhood, or sense of identity, to be associated with the environment rather than one exclusive body." Depending on the platform, the substrate-autonomous person would upload and download into a form or shape (body) that conforms to the environment. So, for a biospheric environment, the person would use a biological body, for the Metaverse, a person would use an avatar, and for virtual reality, the person would use a digital form.

5. Intelligence Explosion



It’s time to retire the term ‘Technological Singularity.’ The reason, says the Future of Humanity Institute's Stuart Armstrong, is that it has accumulated far too much baggage, including quasi-religious connotations. It's not a good description of what might happen when artificial intelligence matches and then exceeds human capacities, he says. What's more, different people interpret it differently, and it only describes a limited aspect of a much broader concept. In its place, Armstrong says we should use a term devised by the computer scientist I. J. Good back in 1965: the “intelligence explosion.” As Armstrong told io9,

It describes the apparent sudden increase in the intelligence of an artificial system such as an AI. There are several scenarios for this: it could be that the system radically self-improves, finding that as it becomes more intelligent, it’s easier for it to become more intelligent still. But it could also be that human intelligence clusters pretty close in mindspace, so a slowly improving AI could shoot rapidly across the distance that separates the village idiot from Einstein. Or it could just be that there are strong skill returns to intelligence, so that an entity need only be slightly more intelligent than humans to become vastly more powerful. In all cases, the fate of life on Earth is likely to be shaped mainly by such “super-intelligences”.

Image credit: sakkmesterke/Shutterstock.

6. Longevity Dividend

While many futurists extol radical life extension on humanitarian grounds, few consider the astounding fiscal benefits that are to be had through the advent of anti-aging biotechnologies. The Longevity Dividend, as suggested to me by bioethicist James Hughes of the IEET, is the “assertion by biogerontologists that the savings to society of extending healthy life expectancy with therapies that slow the aging process would far exceed the cost of developing and providing them, or of providing additional years of old age assistance.” Longer healthy life expectancy would reduce medical and nursing expenditures, argues Hughes, while allowing more seniors to remain independent and in the labor force. No doubt, the corporate race to prolong life is heating up in recognition of the tremendous amounts of money to be made — and saved — through preventative medicines.



7. Repressive Desublimation

This concept was suggested by our very own Annalee Newitz, editor-in-chief of io9 and author of Scatter, Adapt And Remember. The idea of repressive desublimation was first developed by political philosopher Herbert Marcuse in his groundbreaking book Eros and Civilization. Newitz says:

It refers to the kind of soft authoritarianism preferred by wealthy, consumer culture societies that want to repress political dissent. In such societies, pop culture encourages people to desublimate or express their desires, whether those are for sex, drugs or violent video games. At the same time, they’re discouraged from questioning corporate and government authorities. As a result, people feel as if they live in a free society even though they may be under constant surveillance and forced to work at mind-numbing jobs. Basically, consumerism and so-called liberal values distract people from social repression.

8. Intelligence Amplification



Sometimes referred to as IA, this is a specific subset of human enhancement — the augmentation of human intellectual capabilities via technology. “It is often positioned as either a complement to or a competitor to the creation of Artificial Intelligence,” says Ramez Naam. “In reality there is no mutual exclusion between these technologies.” Interestingly, Naam says IA could be a partial solution to the problem of technological unemployment — as a way for humans, or posthumans, to “keep up” with advancing AI and to stay in the loop.

9. Effective Altruism

This is another term suggested by Stuart Armstrong. He describes it as

the application of cost-effectiveness to charity and other altruistic pursuits. Just as some engineering approaches can be thousands of times more effective at solving problems than others, some charities are thousands of times more effective than others, and some altruistic career paths are thousands of times more effective than others. And increased efficiency translates into many more lives saved, many more people given better outcomes and opportunities throughout the world. It is argued that when charity can be made more effective in this way, it is a moral duty to do so: inefficiency is akin to letting people die.

10. Moral Enhancement

On a somewhat related note, James Hughes says moral enhancement is another must-know term for futurists of the 21st Century. Also known as virtue engineering, it’s the use of drugs and wearable or implanted devices to enhance self-control, empathy, fairness, mindfulness, intelligence and spiritual experiences.

11. Proactionary Principle

This one comes via Max More, president and CEO of the Alcor Life Extension Foundation. It’s an interesting and obverse take on the precautionary principle. “Our freedom to innovate technologically is highly valuable — even critical — to humanity,” he told io9. “This implies several imperatives when restrictive measures are proposed: Assess risks and opportunities according to available science, not popular perception. Account for both the costs of the restrictions themselves, and those of opportunities foregone. Favor measures that are proportionate to the probability and magnitude of impacts, and that have a high expectation value. Protect people’s freedom to experiment, innovate, and progress.”

12. Mules

Jamais Cascio suggested this term, though he admits it’s not widely used. Mules are unexpected events — a parallel to Black Swans — that aren’t just outside of our knowledge, but outside of our understanding of how the world works. It’s named after Asimov’s Mule from the Foundation series.

13. Anthropocene


Another must-know term submitted by Cascio, described as “the current geologic age, characterized by substantial alterations of ecosystems through human activity.” (Image credit: NASA/NOAA).

14. Eroom’s Law

Unlike Moore’s Law, where things are speeding up, Eroom’s Law describes — at least in the pharmaceutical industry — things that are slowing down (which is why it’s Moore’s Law spelled backwards). Ramez Naam says the rate of new drugs developed per dollar spent by the industry has dropped by roughly a factor of 100 over the last 60 years. “Many reasons are proposed for this, including over-regulation, the plucking of low-hanging fruit, diminishing returns of understanding more and more complex systems, and so on,” he told io9.
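
To make that figure concrete, here is the back-of-the-envelope arithmetic implied by Naam’s numbers (the factor of 100 and the 60-year window come from the paragraph above; treating the decline as a smooth exponential is my own simplifying assumption):

```python
# Back-of-the-envelope arithmetic for Eroom's Law: if drugs-per-dollar fell
# by a factor of ~100 over ~60 years, what is the implied yearly decline and
# the "halving time" of pharma R&D productivity?
import math

total_decline = 100.0   # factor-of-100 drop, from the paragraph above
years = 60.0            # time window, from the paragraph above

annual_factor = total_decline ** (1.0 / years)                 # ~1.08x worse each year
annual_decline_pct = (1.0 - 1.0 / annual_factor) * 100.0       # ~7.4% less output per dollar per year
halving_time = years * math.log(2) / math.log(total_decline)   # ~9 years

print(f"Implied decline: about {annual_decline_pct:.1f}% per year")
print(f"R&D productivity halves roughly every {halving_time:.0f} years")
```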

15. Evolvability Risk

Natasha Vita-More describes this as the ability of a species to produce variants more apt or powerful than those currently existing within a species:

One way of looking at evolvability is to consider any system — a society or culture, for example, that has evolvable characteristics. Incidentally, it seems that today’s culture is more emergent and mutable than physiological changes occurring in human biology. In the course of a few thousand years, human tools, language, and culture have evolved manifold. The use of tools within a culture has been shaped by the culture and shows observable evolvability (from stones to computers), while human physiology has remained nearly the same.

16. Artificial Wombs


"This is any device, whether biological or technological, that allows humans to reproduce without using a woman’s uterus,” says Annalee Newitz. Sometimes called a “uterine replicator,” she says these devices would liberate women from the biological difficulties of pregnancy, and free the very act of reproduction from traditional male-female pairings. “Artificial wombs might develop alongside social structures that support families with more than two parents, as well as gay marriage,” says Newitz.

17. Whole Brain Emulations

Whole brain emulations, says Stuart Armstrong, are human brains that have been copied into a computer, and that are then run according to the laws of physics, aiming to reproduce the behaviour of human minds within a digital form. As he told io9,


They are dependent on certain (mild) assumptions on how the brain works, and require certain enabling technologies, such as scanning devices to make the original brain model, good understanding of biochemistry to run it properly, and sufficiently powerful computers to run it in the first place. There are plausible technology paths that could allow such emulations around 2070 or so, with some large uncertainties. If such emulations are developed, they would revolutionise health, society and economics. For instance, allowing people to survive in digital form, and creating the possibility of “copyable human capital”: skilled, trained and effective workers that can be copied as needed to serve any business purpose.

Armstrong says this also raises great concern over wages, and over the eventual deletion of such copies.

18. Weak AI


Ramez Naam says this term has gone somewhat out of favor, but it’s still a very important one. It refers to the vast majority of all ‘artificial intelligence’ work that produces useful pattern matching or information processing capabilities, but with no bearing on creating a self-aware sentient being. “Google Search, IBM’s Watson, self-driving cars, autonomous drones, face recognition, some medical diagnostics, and algorithmic stock market traders are all examples of ‘weak AI’,” says Naam. “The large majority of all commercial and research work in AI, machine learning, and related fields is in ‘weak AI’.”

Naam argues that this trend — and the motivations for it — is one of the arguments for the Singularity being further than it appears.

19. Neural Coupling


Imagine the fantastic prospect of creating interfaces that connect the brains of two (or more) humans. Already today, scientists have created interfaces that allow humans to move the limb — or in this case, the tail — of another animal. At first, these technologies will be used for therapeutic purposes; they could be used to help people relearn how to use previously paralyzed limbs. More radically, they could eventually be used for recreational purposes. Humans could voluntarily couple themselves and move each other’s body parts.

20. Computational Overhang

This refers to any situation in which new algorithms can suddenly and dramatically exploit existing computational power far more efficiently than before. This is likely to happen when tons of computational power remains untapped, and when previously used algorithms were suboptimal. This is an important concept as far as the development of AGI (artificial general intelligence) is concerned. As noted by Less Wrong, it

signifies a situation where it becomes possible to create AGIs that can be run using only a small fraction of the easily available hardware resources. This could lead to an intelligence explosion, or to a massive increase in the number of AGIs, as they could be easily copied to run on countless computers. This could make AGIs much more powerful than before, and present an existential risk.

Luke Muehlhauser from the Machine Intelligence Research Institute (MIRI) describes it this way:

Suppose that computing power continues to double according to Moore’s law, but figuring out the algorithms for human-like general intelligence proves to be fiendishly difficult. When the software for general intelligence is finally realized, there could exist a ‘computing overhang’: tremendous amounts of cheap computing power available to run [AIs]. AIs could be copied across the hardware base, causing the AI population to quickly surpass the human population.


I’m sure we missed many must-know terms. Please add your own suggestions to comments.

Source

Anti Facial Recognition Visor


Interesting approach to avoid identification from cameras by lighting key areas of the face (video embedded below, via the great DigInfo):

This is the world’s first pair of glasses which prevent facial recognition by cameras. They are currently under development by Japan’s National Institute of Informatics.

Photos taken without people’s knowledge can violate privacy. For example, photos may be posted online, along with metadata including the time and location. But by wearing this device, you can stop your privacy from being infringed in such ways.

"You can try wearing sunglasses. But sunglasses alone can’t prevent face detection. Because face detection uses features like the eyes and nose, it’s hard to prevent just by concealing your eyes. This is the privacy visor I have developed, which uses 11 near-infrared LEDs. I’m switching it on now. It prevents face detection, like this."

"Light from these near-infrared LEDs can’t be seen by the human eye, but when it passes through a camera’s imaging device, it appears bright. The LEDs are installed in these locations because, a feature of face detection is, the eyes and part of the nose appear dark, while another part of the nose appears bright. So, by placing light sources mostly near dark parts of the face, we’ve succeeded in canceling face detection characteristics, making face detection fail."

Compared with previous ways of physically hiding the face, this technology can protect privacy without obstructing communication, as all users need to do is wear a pair of glasses.
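
The detectors the visor is designed to confuse are, broadly, feature-based classifiers that key on exactly the light/dark eye-and-nose pattern described in the quote. The NII researchers don’t say which detector they tested against, so the sketch below, using OpenCV’s bundled Haar cascade, is only an illustrative stand-in:

```python
# Minimal sketch of conventional face detection with OpenCV's bundled Haar
# cascade. The cascade keys on light/dark patterns around the eyes and nose,
# which is exactly what the visor's near-infrared LEDs are meant to wash out.
import cv2

cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

img = cv2.imread("crowd_photo.jpg")             # placeholder: any test photo
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)    # the detector works on grayscale

faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
print(f"{len(faces)} face(s) detected")

for (x, y, w, h) in faces:                      # mark detections for inspection
    cv2.rectangle(img, (x, y), (x + w, y + h), (0, 255, 0), 2)
cv2.imwrite("faces_marked.jpg", img)
```

A bright blob across the eyes and nose bridge destroys the dark-region features such a cascade searches for, which is why light that is invisible to humans but bright to the sensor is enough to make detection fail.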

More Here

3D Printing system that can create forms without the hindrance of gravity

MATAERIAL

A 3D Printing system that can create forms without the hindrance of gravity - video embedded below:

A brand new method of additive manufacturing. This patent-pending method allows for creating 3D objects on any given working surface independently of its inclination and smoothness, and without the need for additional support structures. Conventional methods of additive manufacturing have been affected both by gravity and by the printing environment: creation of 3D objects on irregular or non-horizontal surfaces has so far been treated as impossible. By using innovative extrusion technology we are now able to neutralize the effect of gravity during the course of the printing process. This method gives us the flexibility to create truly natural objects by making 3D curves instead of 2D layers. Unlike 2D layers that are ignorant of the structure of the object, the 3D curves can follow exact stress lines of a custom shape. Finally, our new out of the box printing method can help manufacture structures of almost any size and shape.

More at the project’s website here

Google’s SCHAFT Takes Home Gold in DARPA Robot Olympics

The DARPA Robotics Challenge Trials 2013 concluded this weekend in Florida, with 16 teams vying for the top prize of $2 million.

According to DARPA, “The DRC is a competition of robot systems and software teams vying to develop robots capable of assisting humans in responding to natural and man-made disasters. Technologies resulting from the DRC will transform the field of robotics and catapult forward development of robots featuring task-level autonomy that can operate in the hazardous, degraded conditions common in disaster zones”.

All 16 robots were required to complete eight tasks as part of the challenge:

  • Task 1: Drive a vehicle
  • Task 2: Walk on mixed terrain
  • Task 3: Climb a ladder
  • Task 4: Remove debris
  • Task 5: Open and walk through doors
  • Task 6: Cut through a wall
  • Task 7: Open a series of valves
  • Task 8: Connect a hose

Gizmodo reports that “with 27 out of a possible 32 points in eight challenges, SCHAFT pulled out a decisive victory”. 

SCHAFT is a 4 ft 11 in, two-legged robot developed by a spin-off from the University of Tokyo’s Jouhou System Kougaku lab, which Google recently revealed it had acquired.

Source

Robotic Animals

Black Phoenix is a fictional military corporation that manufactures robots in a not-so-distant future. The idea is to create an album full of designs that could represent a whole line of products, from utility and semi-civilian drones to multi-purpose mobile weaponry systems and vehicles.

“Black Phoenix Project” is a collaboration with photographer Maria Skotnikova, who is responsible for creating the HDR environment maps that I used as lighting sources as well as backplates. Visit Maria’s website here.

The images below represent the “10 Days of Mech” session. The goal during this exercise was to create one mech design every day in 3d, from start to finish, without creating preliminary 2d sketches, over a non-stop 10-day period. The first 8 designs followed this rule; the 9th design, “Ambulance Mech”, took an extra day because I wanted to show an “open cockpit” version of it. So after the exercise was over I decided to make an extra design (with another 2 days) as a bonus entry, just to bring the total to 10 robots.

Before starting this exercise I spent some R&D time establishing the overall workflow for speed-modeling and tried different techniques that enabled me to accelerate the design process in 3d. The workflow included re-using premade kit-bash parts, graphics/decals, non-subdivision based concept modeling and image-based lighting for the final rendering. Click here to read more about the workflow. Click here to visit the online store where you can purchase the original kit-bash sets that were used for the “Black Phoenix” Project designs.

Source

Walmart’s virtual stores at Canadian bus stops

Wal-Mart Canada has launched digital signs at bus stops where customers can use their mobile devices to scan products on posters and have the goods delivered to their homes for free. The campaign will last four weeks. Since consumers are typically pressed for time, this is one way of adding value, Simon Rodrigue, vice president of e-commerce for Wal-Mart Canada, said in a statement. “This campaign allows us to help Torontonians shop for essentials on the go, anywhere, at any time.”

“What we used to have before is, here is something we have on sale; please come to our store and buy it. Now what we’re saying is we have this product on sale; buy it right this instant,” says David Elsner, manager, retail consulting services at PwC. “That cash register is in their hand. They can make that purchase.”

Source

Subway Virtual Grocery Store

Virtual supermarkets are popping up in subway stations in South Korea, where commuters can virtually shop for items while waiting for the train to come. Customers simply scan an item’s QR code using the free "Homeplus" app and can have it delivered to their doorstep before they even get home. Ranked as the second most hard-working country in the world, after Japan, South Korea is rewarding its workers with this timesaving gem.
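
On the shopper’s side the mechanics are simple: each poster item carries a QR code encoding a product identifier, which the app resolves against a catalog and adds to an order. The sketch below is a rough illustration of that decode-and-lookup step; the pyzbar library, the SKU format, and the catalog are stand-ins of my own, not details of the actual Homeplus app:

```python
# Hypothetical sketch of the decode-and-lookup step: read a product QR code
# from a photo of the poster and match it against a (made-up) catalog.
from PIL import Image
from pyzbar.pyzbar import decode   # pip install pyzbar

CATALOG = {                        # stand-in for the retailer's product database
    "HP-000123": ("Milk, 1 L", 2.50),
    "HP-000456": ("Instant noodles, 5-pack", 3.20),
}

def scan_poster(photo_path: str) -> None:
    for symbol in decode(Image.open(photo_path)):
        sku = symbol.data.decode("utf-8")
        if sku in CATALOG:
            name, price = CATALOG[sku]
            print(f"Added to cart: {name} (${price:.2f})")
        else:
            print(f"Unrecognized code: {sku}")

scan_poster("subway_poster.jpg")   # placeholder photo of the poster wall
```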

(Source: travel.spotcoolstuff.com)

Source

IBM reveals its top five innovation predictions for the next five years


IBM revealed its predictions for five big innovations that will change our lives within five years.


The IBM “5 in 5” is now in its eighth year, and this year’s prognostications are sure to get people talking. We discussed them with Bernie Meyerson, the vice president of innovation at IBM, and he told us that the goal of the predictions is to better marshal the company’s resources in order to make them come true.

“We try to get a sense of where the world is going because that focuses where we put our efforts,” Meyerson said. “The harder part is nailing down what you want to focus on. Unless you stick your neck out and say this is where the world is going, it’s hard to turn around and say you will get there first. These are seminal shifts. We want to be there, enabling them.”

(See our complete interview with Meyerson here).

In a nutshell, IBM says:

  • The classroom will learn you.
  • Buying local will beat online.
  • Doctors will use your DNA to keep you well.
  • A digital guardian will protect you online.
  • The city will help you live in it.

Meyerson said that this year’s ideas are based on the fact that everything will learn. Machines will learn about us, reason, and engage in a much more natural and personalized way. IBM can already figure out your personality by deciphering 200 of your tweets, and its capability to read your wishes will only get better. The innovations are being enabled by cloud computing, big data analytics (the company recently formed its own customer-focused big data analytics lab), and adaptive learning technologies. IBM believes the technologies will be developed with the appropriate safeguards for privacy and security, but each of these predictions raises additional privacy and security issues.

As computers get smarter and more compact, they will be built into more devices that help us do things when we need them done. IBM believes that these breakthroughs in computing will amplify our human abilities. The company came up with the predictions by querying its 220,000 technical people in a bottom-up fashion and tapping the leadership of its vast research labs in a top-down effort.

Here’s some more detailed description and analysis on the predictions.

In five years, the classroom will learn you to help tailor instruction to your individual needs. (Image credit: IBM)

The classroom will learn you

Globally, two out of three adults haven’t gotten the equivalent of a high school education. But IBM believes the classrooms of the future will give educators the tools to learn about every student, providing them with a tailored curriculum from kindergarten to high school.

“Your teacher spends time getting to know you every year,” Meyerson said. “What if they already knew everything about how you learn?”

In the next five years, IBM believes teachers will use “longitudinal data” such as test scores, attendance, and student behavior on electronic learning platforms — and not just the results of aptitude tests. Sophisticated analytics delivered over the cloud will help teachers make decisions about which students are at risk, their roadblocks, and the way to help them. IBM is working on a research project with the Gwinnett County Public Schools in Georgia, the 14th largest school district in the U.S. with 170,000 students. The goal is to increase the district’s graduation rate. And after a $10 billion investment in analytics, IBM believes it can harness big data to help students out.

“You’ll be able to pick up problems like dyslexia instantly,” Meyerson said. “If a child has extraordinary abilities, they can be recognized. With 30 kids in a class, a teacher cannot do it themselves. This doesn’t replace them. It allows them to be far more effective. Right now, the experience in a big box store doesn’t resemble this, but it will get there.”

In five years, buying local will beat online as you get online data at your fingertips in the store. (Image credit: IBM)

Buying local will beat online

Online sales topped $1 trillion worldwide last year, and many physical retailers have gone out of business as they fail to compete on price with the likes of Amazon. But innovations for physical stores will make buying local turn out better. Retailers will use the immediacy of the store and proximity to customers to create experiences that online-only retail can’t replicate. The innovations will bring the power of the Web right to where the shopper can touch it. Retailers could rely on artificial intelligence akin to IBM’s Watson, which played Jeopardy better than many human competitors. The Web can make sales associates smarter, and augmented reality can deliver more information to the store shelves. With these technologies, stores will be able to anticipate what a shopper most wants and needs.

And they won’t have to wait two days for shipping.

“The store will ask if you would like to see a certain camera and have a salesperson meet you in a certain aisle where it is located,” Meyerson said. “The ability to do this painlessly, without the normal hassle of trying to find help, is very powerful.”

This technology will get so good that online retailers are likely to set up retail showrooms to help their own sales.

“It has been physical against online,” Meyerson said. “But in this case, it is combining them. What that enables you to do is that mom-and-pop stores can offer the same services as the big online retailers. The tech they have to serve you is as good as anything in online shopping. It is an interesting evolution but it is coming.”

In five years, doctors will routinely use your DNA to keep you well. (Image credit: IBM)

Doctors will use your DNA to keep you well

Global cancer rates are expected to jump by 75 percent by 2030. IBM wants computers to help doctors understand how a tumor affects a patient down to their DNA. They could then figure out what medications will best work against the cancer, and deliver a personalized cancer treatment plan. The hope is that genomic insights will reduce the time it takes to find a treatment from weeks to minutes.

“The ability to correlate a person’s DNA against the results of treatment with a certain protocol could be a huge breakthrough,” Meyerson said. It’ll be able to scan your DNA and find out if any magic bullet treatments exist that will address your particular ailment.

IBM recently made a breakthrough with a nanomedicine that it can engineer to latch on to fungal cells in the body and attack them by piercing their cell membranes. The fungi won’t be able to adapt to these kinds of physical attacks easily. That sort of advance, where the attack is tailored against particular kinds of cells, will be more common in the future.

In five years, a digital guardian will protect you online. (Image credit: IBM)

A digital guardian will protect you online

We have more passwords, identifications, and devices than ever before. But security across them is highly fragmented. In 2012, 12 million people were victims of identity fraud in the U.S. In five years, IBM envisions a digital guardian that will become trained to focus on the people and items it’s entrusted with. This smart guardian will sort through contextual, situational, and historical data to verify a person’s identity on different devices. The guardian can learn about a user and make an inference about behavior that is out of the norm and may be the result of someone stealing that person’s identity. With 360 degrees of data about someone, it will be much harder to steal an identity.

“In this case, you don’t look for the signature of an attack,” Meyerson said. “It looks at your behavior with a device and spots something anomalous. It screams when there is something out of the norm.”
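
IBM doesn’t spell out how such a guardian would score behavior, but the core idea Meyerson describes (flag actions that deviate from your own historical pattern, rather than match a known attack signature) can be sketched with a very simple statistical baseline. The features, numbers, and threshold below are illustrative assumptions only:

```python
# Toy sketch of behavior-based anomaly detection: build a baseline from a
# user's past sessions, then flag new sessions that sit far outside it.
# A real guardian would use far richer features and models; this is a z-score toy.
import statistics

# Hypothetical session history: (login hour, MB downloaded) per session
history = [(9, 120), (10, 95), (9, 140), (11, 110), (10, 130), (9, 105)]

hours = [h for h, _ in history]
megabytes = [m for _, m in history]
hour_mu, hour_sd = statistics.mean(hours), statistics.stdev(hours)
mb_mu, mb_sd = statistics.mean(megabytes), statistics.stdev(megabytes)

def is_anomalous(hour: float, mb: float, threshold: float = 3.0) -> bool:
    """Flag a session if any feature is more than `threshold` std devs from baseline."""
    z_hour = abs(hour - hour_mu) / hour_sd
    z_mb = abs(mb - mb_mu) / mb_sd
    return max(z_hour, z_mb) > threshold

print(is_anomalous(10, 125))    # a typical session        -> False
print(is_anomalous(3, 2400))    # 3 a.m., huge download    -> True
```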

In five years, the city will help you live in it. (Image credit: IBM)

The city will help you live in it

IBM says that, by 2030, the towns and cities of the developing world will make up 80 percent of urban humanity, and that by 2050, seven out of every 10 people will be city dwellers. To deal with that growth, cities will need automation: smarter cities that understand in real time how billions of events occur, as computers learn what people need, what they like, what they do, and how they move from place to place.

IBM predicts that cities will digest information freely provided by citizens to place resources where they are needed. Mobile devices and social engagement will help citizens strike up a conversation with their city leaders. Such a concept is already in motion in Brazil, where IBM researchers are working with a crowdsourcing tool that people can use to report accessibility problems, via their mobile phones, to help those with disabilities better navigate urban streets.

Of course, as in the upcoming video game Watch Dogs from Ubisoft, a bad guy could hack into the city and use its monitoring systems in nefarious ways. But Meyerson said, “I’d rather have the city linked. Then I can protect it. You have an agent that looks over the city. If some wise guy wants to make the sewage pumps run backwards, the system will shut that down.”

The advantage of the ultraconnected city is that feedback is instantaneous and the city government can be much more responsive.

Source

Prototype Real / Digital Info Interface System


Using projection and gestures to create interactive relationship with information - video embedded below:

Fujitsu Laboratories has developed a next-generation user interface which can accurately detect the user’s finger and what it is touching, creating an interactive touchscreen-like system using objects in the real world.

"We think paper and many other objects could be manipulated by touching them, as with a touchscreen. This system doesn’t use any special hardware; it consists of just a device like an ordinary webcam, plus a commercial projector. Its capabilities are achieved by image processing technology."

Using this technology, information can be imported from a document as data, by selecting the necessary parts with your finger.
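
Fujitsu doesn’t detail the algorithm beyond “an ordinary webcam, a commercial projector, and image processing,” so the sketch below shows just one common approach to the fingertip-tracking half of the problem (skin-color segmentation plus contour analysis with OpenCV); the real system also has to judge finger height above the page to decide what counts as a touch:

```python
# Illustrative sketch: estimate a fingertip position in a webcam frame via
# skin-color segmentation and contour analysis. A stand-in for the kind of
# image processing a camera+projector interface needs; not Fujitsu's code.
import cv2
import numpy as np

cap = cv2.VideoCapture(0)                      # ordinary webcam, as in the demo
ret, frame = cap.read()
cap.release()

if ret:
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    # Very rough skin-tone range in HSV (an assumption; needs per-setup tuning)
    mask = cv2.inRange(hsv, np.array([0, 30, 60]), np.array([20, 150, 255]))
    mask = cv2.medianBlur(mask, 5)

    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    if contours:
        hand = max(contours, key=cv2.contourArea)   # assume largest blob is the hand
        tip_index = hand[:, :, 1].argmin()          # topmost point of that blob
        fingertip = tuple(hand[tip_index][0])
        print("Estimated fingertip position (x, y):", fingertip)
```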

More at DigInfo here

RELATED: This is very similar to a concept developed in 1991 called ‘The Digital Desk’ [link]

Robots at Work and Play

Advancements in robotics are continually taking place in the fields of space exploration, health care, public safety, entertainment, defense, and more. These machines — some fully autonomous, some requiring human input — extend our grasp, enhance our capabilities, and travel as our surrogates to places too dangerous or difficult for us to go. Gathered here are recent images of robotic technology at the beginning of the 21st century, including robotic insurgents, NASA’s Juno spacecraft on its way to Jupiter, and a machine inside an archaeological dig in Mexico. [32 photos]

image
Bipedal humanoid robot “Atlas”, primarily developed by the American robotics company Boston Dynamics, is presented to the media during a news conference at the University of Hong Kong, on October 17, 2013. The 6-foot (1.83 m) tall, 330-pound (149.7 kg) robot is made of graded aluminum and titanium and costs HK$ 15 million ($1.93 million). It is capable of a variety of natural movements, including dynamic walking, calisthenics and user programmed behaviors, according to the University of Hong Kong’s press release. (Reuters/Tyrone Siu)

image
French patient Florian Lopes, 22, holds a tree branch with his new bionic hand at the readaptation center of Coubert, southeast of Paris, on June 3, 2013. Lopes lost three fingers in an accident at the end of 2011 and was the first French patient to receive this type of artificial limb, worth 42,000 euros, already used in Scotland or the US. (Thomas Samson/AFP/Getty Images)

image
An MVF-5 Multifunctional Robotic Firefighting System by the company Dok-Ing sprays its water cannon as part of a TIEMS annual conference entitled “Robotics in emergency and crisis management, use of UGVs, from Military and EOD to Civil protection” at the Bouches-du-Rhone Fire Department school (SDIS 13) in Velaux, southern France. (Bertrand Langlois/AFP/Getty Images)

image
A man holds a Telenoid R1 robot during the Innorobo 2013 fair (Innovation Robotics Summit) as companies and research centers present their latest technologies in robotics in Lyon, on March 19, 2013. The Telenoid R1 is designed as a telepresence robot, to serve as a remote presence for a person, such as a grandchild, and allow people to communicate in a more natural setting.(Reuters/Robert Pratta)

image
Two four-legged robots, part of DARPA’s Legged Squad Support System (LS3) program, run through a field during testing. The semi-autonomous LS3 machines are being designed to help carry heavy loads through rugged terrain, interacting with troops in a similar way to a trained pack animal. (DARPA)

image
On October 9, NASA’s Juno spacecraft flew by Earth using the home planet’s gravity to get a boost needed to reach Jupiter. The JunoCam caught this image of Earth, and other instruments were tested to ensure they work as designed during a close planetary encounter. Juno was launched from NASA’s Kennedy Space Center in Florida on August 5, 2011. Juno’s rocket, the Atlas 551, was only capable of giving Juno enough energy or speed to reach the asteroid belt, at which point the Sun’s gravity pulled Juno back toward the inner solar system. The Earth flyby gravity assist increases the spacecraft’s speed to put it on course for arrival at Jupiter on July 4, 2016.(NASA/JPL-Caltech/Malin Space Science Systems)

image
In this October 6, 2013 photo, laser lights outline a robot during a performance at Robot Restaurant in Tokyo.(AP Photo/Jacquelyn Martin)

image
A SWAT robot, a remote-controlled small tank-like vehicle with a shield for officers, is demonstrated for the media in Sanford, Maine, on, April 18, 2013. Howe & Howe Technologies, a Waterboro, Maine company, says their device keeps SWAT teams and other first responders safe in standoffs and while confronting armed suspects. (AP Photo/Robert F. Bukaty)

image
Graduate student Baker Potts handles a prototype robotic eel in a pool inside the engineering building at the University of New Orleans, on October 2, 2012 in New Orleans. The robotic eel might be able to wriggle through dangerous waters with almost no wake, letting it move on little power and with little chance of radar detection as it looks for underwater mines. (AP Photo/Gerald Herbert) 

image
President Barack Obama shakes a robotic hand as he looks at science fair projects in the State Dinning Room of the White House in Washington, D.C., on April 22, 2013. Obama hosted the White House Science Fair and celebrated the student winners of a broad range of science, technology, engineering and math (STEM) competitions from across the country. (Jewel Samad/AFP/Getty Images) 

image
A robotic dragon from the medieval spectacle “The Dragon’s Sting” burns Christmas trees in Furth im Wald, Germany, on January 24, 2013. (AP Photo/dpa/Armin Weigel)

image
A robotic camera platform records Norway’s driver Andreas Mikkelsen and Finnish co-driver Mikko Markkula as they drive their Volkswagen Polo R WRC during the qualifying stage of the FIA World Rally Championship of Italy near Olbia, on the Italian island of Sardinia on June 20, 2013. (Andreas Solaro/AFP/Getty Images)

image
The CEOs of Marathon set and prepare Robotic Moving Targets for use in the Moving Target Technique Limited Objective Experiment 2 at Marine Corps Base Quantico, Virginia, on September 24, 2013. The robots, developed by the Australian company Marathon, present a target the size of an average person, fall over when shot and can simulate average walking and running paces from four to eight miles an hour. The experiment tests the most effective technique and method to engage moving targets with the M-4 carbine and M-27 infantry automatic rifle. (U.S. Marine Corps/Pfc. Eric T. Keenan) 

image
Robots deliver dishes to customers at a Robot Restaurant in Harbin, Heilongjiang province, China, on January 12, 2013. Opened in June 2012, the restaurant has gained fame in using a total of 20 robots, which range in height from 1.3 to 1.6 meters (4.27-5.25 ft), to cook meals and deliver dishes. The robots can work continuously for five hours after a two-hour charge, and are able to display over 10 expressions on their faces and say basic welcoming sentences to customers. (Reuters/Sheng Li) 

image
A mobile fish pen system, developed by Lockheed Martin, constantly moves along the ocean’s surface, in waters over 12,000 ft deep, working to solve the potential problems of impacts on water quality or impacts on the seafloor. The system operates by integrating satellite communications, remote sensing data feeds, robotics, motor controls, and command and control and situational awareness software. (PRNewsFoto/Lockheed Martin)

image
A Toshiba decontamination robot, for work inside a nuclear plant, during a demonstration at Toshiba’s technical center in Yokohama, suburban Tokyo, on February 15, 2013. The crawler robot blasts dry ice particles against contaminated floors or walls and will be used for the decontamination in TEPCO’s stricken Fukushima nuclear power plant. (Yoshikazu Tsuno/AFP/Getty Images)

image
Danish scientist Henrik Scharfe (right) poses with his Geminoid-DK robot during its presentation at the National Robotics Olympiad in San Jose, on August 16, 2013. The Geminoid-DK is a tele-operated Android in the geminoid series and is made to appear as an exact copy of its creator, Professor Scharfe. (Reuters/Juan Carlos Ulate)

image
This image provided by NASA is one of a series of still photos documenting the process to release the SpaceX Dragon-2 spacecraft from the International Space Station, on March 26, 2013. The spacecraft, filled with experiments and old supplies, can be seen in the grasp of the Space Station Remote Manipulator System’s robot arm or CanadArm2 after it was undocked from the orbital outpost. The Dragon was scheduled to make a landing in the Pacific Ocean, off the coast of California, later in the day. The moon can be seen at center.(AP Photo/NASA)

image
Zac Vawter, a 31-year-old software engineer from Seattle, Washington, prepares to climb to the 103rd story of the Willis Tower using the world’s first neural-controlled Bionic leg in Chicago, on November 4, 2012. According to the Rehabilitation Institute of Chicago, their Center for Bionic Medicine has worked to develop technology that allows amputees like Vawter to better control prosthetics with their own thoughts. (Reuters/John Gress)

image
Camels ridden by robot jockeys compete during a weekly camel race at the Kuwait Camel Racing club in Kebd, on January 26, 2013. The robots are controlled by trainers, who follow in their vehicles around the track. (Reuters/Stephanie McGehee)

image
NASA’s new Earth-bound rover, GROVER, which stands for both Greenland Rover and Goddard Remotely Operated Vehicle for Exploration and Research, in Summit Camp, the highest spot in Greenland, on May 10, 2013. GROVER is an autonomous, solar-operated robot that carries a ground-penetrating radar to examine the layers of Greenland’s ice sheet. Its findings will help scientists understand how the massive ice sheet gains and loses ice. After loading and testing the rover’s radar and fixing a minor communications glitch, the team began the robot’s tests on the ice on May 8, defying winds of up to 23 mph (37 kph) and temperatures as low as minus 22 F (minus 30 C). (Lora Koenig/NASA Goddard) 

image
Humanoid robot bartender “Carl” gestures to guests at the Robots Bar and Lounge in the eastern German town of Ilmenau, on July 26, 2013. “Carl”, developed and built by mechatronics engineer Ben Schaefer who runs a company for humanoid robots, prepares spirits for the mixing of cocktails and is able to interact with customers in small conversations. (Reuters/Fabrizio Bensch)

image
An X-47B Unmanned Combat Air System demonstrator launches from the aircraft carrier USS George H.W. Bush (CVN 77) after completing its first arrested landing on the flight deck of an aircraft carrier. The landing marks the first time any unmanned aircraft has completed an arrested landing at sea. (U.S. Navy/Mass Communication Specialist 3rd Class Christopher A. Liaghat)

image
Bipedal humanoid robot “Atlas”, primarily developed by the American robotics company Boston Dynamics, practises tai chi during a news conference at the University of Hong Kong, on October 17, 2013. (Reuters/Tyrone Siu)

image
A robot helps passengers to find their way through the baggage claim area of the Geneva International Airport, on June 13, 2013. Geneva airport is using the autonomous robot to accompany travelers to a dozen destinations such as trolleys, ATM, lost luggage room, showers or toilets. (Fabrice Coffrini/AFP/Getty Images) 

image
A view from the front hazcam of NASA’s Mars rover Opportunity, on Sol 3412 (August 29, 2013), still operating, driving across Mars’ surface and collecting data nearly 10 years since its January, 2004 landing. (NASA/JPL) 

image
Kokoro displays the company’s humanoid robot called “Actroid” (left) and its internal workings (center) at Sanrio’s headquarters in Tokyo, on February 7, 2013. (Yoshikazu Tsuno/AFP/Getty Images) 

image
Rosser Pryor, Co-owner and President of Factory Automation Systems, sits next to a new high-performance industrial robot at the company’s Atlanta facility, on January 15, 2013. Pryor, who cut 40 of 100 workers since the recession, says while the company is making more money now and could hire ten people, it is holding back in favor of investing in automation and software.(AP Photo/David Goldman)

image
Chinese inventor Tao Xiangli welds a component of his self-made robot (rear) in the yard of his house in Beijing, on May 15, 2013. Tao, 37, spent about 150,000 yuan (USD 24,407) and more than 11 months to build the robot out of recycled scrap metals and electric wires that he bought from a second-hand market. The robot is 2.1 meters tall and around 480 kilograms (529 lbs) in weight.(Reuters/Suzie Wong)

image
Photographers take photos of Toshiba Corp’s new four-legged robot which the company says is capable of carrying out investigative and recovery work at tsunami-crippled Fukushima Daiichi nuclear power plant during a demonstration at the company’s Yokohama complex in Yokohama, on November 21, 2012. The new tetrapod robot, which is able to walk on uneven surfaces, avoid obstacles and climb stairs, integrates a camera and dosimeter and is able to investigate the condition of nuclear power plants by remote-controlled operation. (Reuters/Yuriko Nakao)

image
A robot used to explore ruins in the entrance of a tunnel in an archaeological section of the Quetzalcoatl Temple near the Pyramid of the Sun at the Teotihuacan archaeological site, about 60 km (37 miles) north of Mexico City, on April 22, 2013. The robot has discovered three ancient chambers in the last stretch of unexplored tunnel at Mexico’s famed Teotihuacan archaeological site, the first robotic discovery of its kind in the Latin American country. Named Tlaloc II after the Aztec god of rain, the robot was first lowered into the depths of the 2,000-year-old tunnel under the Quetzalcoatl Temple to check it was safe for human entry. After months of exploration, the remote-controlled vehicle has relayed back video images to researchers of what appears to be three ancient chambers located under the Mesoamerican city’s pyramid. (Reuters/Henry Romero)

image
An engineer makes an adjustment to the robot “The Incredible Bionic Man” at the Smithsonian National Air and Space Museum in Washington, D.C., on October 17, 2013. The robot is the world’s first-ever functioning bionic man made of prosthetic parts and artificial organ implants. (Reuters/Joshua Roberts)

Inventing Interactive: Interview: Jorge Almeida (Star Trek Into Darkness)


I’m a massive Star Trek fan. So I’m super-excited that Jorge Almeida took some time to discuss his work on Star Trek Into Darkness — for which he was the lead designer of the UI elements. (If you’re paying attention you’ll remember this previous post with Jorge on his work for MI:4 and The Dark Knight Rises).

Q: How did you get involved with Star Trek Into Darkness?

OOOii (pronounced “ooh-wee”) created all of the user interfaces for the first film, so we were brought on to continue our work on the second. I had done some UI work on “Star Trek”, and was asked to take the lead on “Star Trek Into Darkness.” I got a chance to see the movie on Sunday. Just a great ride. I am really proud to have been a part of this film. Hopefully fans will like what we did.

Q: What was your role? Were there a lot of others involved in the design and production? What software did you use?

I was lead designer for OOOii. I oversaw the look and animation style for all of the UI in the film. We had a great team, with major contributions from Blaise Hossain, David Schoneveld, Paul Luna, and Andrew Tomandl. I also need to single out Rudy Vessup, who was my right hand man on this job. Just a fantastic motion graphics artist and a real pro.

Everything we created was done using some combination of Adobe Illustrator, Photoshop, and After Effects. Additional 3d elements were created using Maya.

Q: Was there a general design brief or design direction that you were given? What were your design influences?

For the Enterprise, production already had the full set of interface animations we created from the first film, so we were only responsible for additional UI specific to the story. It was therefore important that I maintain the style and the spirit of what was done in the first film.

Scott Chambliss was the production designer, and I loved what he did with “Star Trek.” The look of that film reminded me of some of Frank Frazetta’s classic Buck Rogers illustrations. I would always keep that style in mind when designing. I’m also a fan of the classic LCARS interface from “The Next Generation.” While production wasn’t looking for a revision of LCARS, the curved corners and elegance of those interfaces definitely had an influence on my work.

We also had the advantage of having seen the first film and how it was cut. The action often moves quickly, so the UI had to communicate story points clearly and efficiently. When you’re spending days or weeks on a shot, it’s easy to forget that it may only be onscreen for less than two seconds.

Q: Can you describe the work that went into the UI development for the starship Vengeance?

Early on, there was a focus placed on the starship “Vengeance.” They were shooting the Vengeance towards the end of the schedule, but Scott wanted to get a clear direction before production started and other priorities took over. He provided us with some imagery to use as inspiration, most of it pretty abstract, but the shapes definitely felt interstellar. There were many overlapping circles, and cloud-like clusters. They reminded me of some of the space station research I had done. I presented him with ideas and he started to narrow it down from there.

00 OOOii_StarTrek_IntoDarkness_Vengeance_Concept_01

01 OOOii_StarTrek_IntoDarkness_Vengeance_Concept_03

03 OOOii_StarTrek_IntoDarkness_data_concept_05

04 OOOii_StarTrek_IntoDarkness_Vengeance_concept_01

05 OOOii_StarTrek_IntoDarkness_Vengeance_UI_concept_04

The “Vengeance,” like the “Enterprise,” featured 4 sets of monitors that wrap around the top half of the bridge walls and act as a 360º radar monitor. Some of the images Scott had provided us felt like nautical maps, so I kept that in mind when coming up with ideas. Thinking of the monitors as windows of a submarine, I tried to make what was happening outside feel slightly ominous and alive.

02 OOOii_StarTrek_IntoDarkness_Vengeance_UI_concept_06

06 OOOii_StarTrek_IntoDarkness_Vengeance_sketches_14

07 OOOii_StarTrek_IntoDarkness_Vengeance_UI_concept_07

Once we started testing the animations on set, Scott asked us to desaturate them quite a bit so that they would blend in better with the black interior. I really liked the effect. Here are some of the finals (the viewscreen was done in post):

10 OOOii_StarTrek_IntoDarkness_Vengeance_UR_01

10B OOOii_StarTrek_IntoDarkness_Vengeance_UR_02

11 OOOii_StarTrek_IntoDarkness_Vengeance_Final_01

12 OOOii_StarTrek_IntoDarkness_Vengeancee_UI_01

13 OOOii_StarTrek_IntoDarkness_Vengeance_UI_03

14 OOOii_StarTrek_IntoDarkness_Vengeance_Viewscreen_01

Q: There’s some really interesting heads-up display work. What was involved in their design?

All of the heads-up display shots were obviously done during post-production, so we worked under the direction of Visual Effects Supervisor Roger Guyett. We presented our work regularly to Roger and VFX Producer Ron Ames for comments, and eventually they would present our work to JJ.

The entire space jump sequence was definitely a highlight for me. It was obvious from the first edit I saw that this scene was going to be a lot of fun. We were asked to create the UI for the viewscreen, the glass panel display, and for the helmet heads-up display.

My goal with the HUD was to minimize the interface as much as possible. I wanted to frame it around the actor’s face in a way that didn’t feel too tech-heavy. I was trying to make it feel soothing, with a steady pulse, so that the animation had somewhere to go when things got dangerous.
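
Purely as an illustration of that “steady pulse with somewhere to go” idea (and not the production’s actual animation setup), here is a minimal sketch in TypeScript of a pulse whose tempo and depth scale with a hypothetical danger value, so the calm state has room to escalate:

```typescript
// Hypothetical sketch: a HUD element's brightness pulses gently at rest,
// and both the tempo and the depth of the pulse grow as "danger" rises.
// None of these names come from the production pipeline.

function hudPulse(timeSeconds: number, danger: number): number {
  const d = Math.min(Math.max(danger, 0), 1); // clamp danger to [0, 1]
  const rateHz = 0.5 + 2.5 * d;               // calm: 0.5 Hz, frantic: 3 Hz
  const depth = 0.1 + 0.4 * d;                // how far brightness swings
  const base = 0.7;                           // resting brightness
  // Sine wave centered on the base brightness, clamped to [0, 1].
  const value = base + depth * Math.sin(2 * Math.PI * rateHz * timeSeconds);
  return Math.min(Math.max(value, 0), 1);
}

// Example: sample the calm and dangerous states at the same moment.
console.log(hudPulse(0.25, 0.0)); // gentle, slow pulse
console.log(hudPulse(0.25, 1.0)); // faster, deeper pulse
```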

00 OOOii_StarTrek_IntoDarkness_HeadsUp_01

01 OOOii_StarTrek_IntoDarkness_HeadsUp_thumbnail_01

02 OOOii_StarTrek_IntoDarkness_HeadsUp_thumbnail_02

03 OOOii_StarTrek_IntoDarkness_HeadsUp_thumbnail_03

The projected flightpath was something they had as a rough concept in their original edit, so I just took it from there. I had seen some POV video of an Olympic luger and thought it had the right rhythm and movement to use as a starting point for the animation. I showed our 3D artist the videos, as well as some sketches I had done, and he started building elements in Maya. He rendered a variety of frames and I started combining them in Photoshop until we came up with a style that production liked.

04 OOOii_StarTrek_IntoDarkness_HeadsUp_concept_01

05 OOOii_StarTrek_IntoDarkness_HeadsUp_concept_02

06 OOOii_StarTrek_IntoDarkness_HeadsUp_concept_03

From there, it was a matter of animating the individual shots, which I did in After Effects. I would create the animation, then put together rough comps so Roger and JJ could see the graphics in context. Once approved, I provided the flat HUD graphics as separate passes to ILM so that they had flexibility when doing the final compositing. The whole process went pretty smoothly.

08 OOOii_StarTrek_IntoDarkness_HeadsUp_03

09 OOOii_StarTrek_IntoDarkness_HeadsUp_02

Q: How did you approach the Enterprise viewscreens?

One of the major challenges in post was designing the Enterprise viewscreen interface. There were only one or two viewscreen interfaces in the first film, but in “Star Trek Into Darkness” there were several. The obvious challenge was keeping the look consistent with the rest of the bridge. Like Scott, Roger also wanted to avoid any design that felt too grid-like or text-heavy.

I don’t really have a set process for how I work. Sometimes I draw thumbnails, sometimes I just start throwing elements into a Photoshop or After Effects file and start mixing and matching. Generally my philosophy is to keep fixing it until it breaks, then take it back a step. I heard Iain McCaig say that in a video once. Made sense to me.

In practical UI, you are trying to give the user an elegant way to make choices. With film UI, I am trying to give the viewer the illusion of choice. I am trying to deliberately direct the viewer’s eye to whatever story point the director wants revealed at the moment he wants it revealed. The job becomes more about illustration, especially in post, where we can see how the interface is framed within the shot. We paint a small part of a much bigger picture, and our work needs to visually support what’s on screen so that we don’t disrupt the rhythm of the viewing experience.

One technique that I often use is to design in greyscale (using an adjustment layer). It reduces the composition to its basic values so that I can design without being distracted by color. We also often use Adobe Bridge to review various concepts and composites at thumbnail size. It’s an easy way to see which designs are the most effective.

The viewscreen for the volcano sequence was one of the first priorities we had, so the development process took place around that interface. I began with thumbnail sketches and tried to work out compositions both on paper and in Photoshop.

01 OOOii_StarTrek_IntoDarkness_Viewscreen_Volcano_thumbnail_02

02 OOOii_StarTrek_IntoDarkness_Viewscreen_Volcano_thumbnail_01

04 OOOii_StarTrek_IntoDarkness_Viewscreen_Volcano_concept_02

The volcano viewscreen quickly exposed an issue with making the design too nonlinear: we risked losing the distinction between what was being projected on the glass and what was floating behind it. The viewscreen needed some type of framing to visually attach it to the ship and easily distinguish it from the environment. We had used translucent glass panels as border elements in the first film, so I started enlarging and reconfiguring them to break up the shape of the viewscreen. I then added and rearranged graphic elements within that framework until the interface had a balance between design and functionality that everyone was happy with.

06 OOOii_StarTrek_IntoDarkness_Viewscreen_Volcano_concept_03

07 OOOii_StarTrek_IntoDarkness_Viewscreen_Volcano_04

Once the first couple of viewscreens were approved, the look took off from there. We provided the elements to ILM in separate passes so they could make adjustments and dial in the final composites with Roger and JJ. ILM, as always, did a fantastic job. I couldn’t be happier with how our graphics looked onscreen.

08 OOOii_StarTrek_IntoDarkness_Viewscreen_Volcano_Final_01

09 OOOii_StarTrek_IntoDarkness_Viewscreen_Volcano_Final_02

10 OOOii_StarTrek_IntoDarkness_Viewscreen_01

11 OOOii_StarTrek_IntoDarkness_Viewscreen_Warp_01

Q: Any final thoughts?

“Star Trek Into Darkness” did a lot of shooting in Los Angeles, so I was much closer to this production than I have been to any film in a while. We were on set a lot, so I was reminded first-hand of just what an enormous operation film production is. Multiple sets being built simultaneously. Trees being painted red on one stage, and a giant Starfleet shuttle on the next. I was humbled by the tireless efforts of our producer, Jennifer Simms, as well as playback producer Cindy Jones. They took on many of the headaches of the job and helped facilitate the constant flow of information between our team and production. That is not easy when you’re talking about creative notes one second, detailed technical issues the next, and budget issues in between, all while this giant train is in motion.

I was also reminded of just how much we depend on the playback crew on set to make our animations work within a scene. We’ve worked with Monte and the guys at Cygnet Video for years. Aside from handling technical issues, they are also responsible for cueing our animations in sync with the actors’ movements. Ultimately, what you see on screen is an elaborate dance between a large number of people both onscreen and off. It’s pretty amazing to watch it all come together so effectively.

Thank you.

Thanks for the interest in our work. Hopefully people enjoy the movie as much as I did.

You can see more of OOOii’s work on their website, oooii.com. And Jorge has posted more of his developmental work on his website: jorgeonline.me.

jorgeonline_ST2_galactic_map_02

jorgeonline_ST2_Hallway_01

jorgeonline_ST2_kiosk_02

jorgeonline_ST2_Office_01

jorgeonline_ST2_Warp_Core_01

Source

Kickstarter: Developer kit for the Oculus Rift - the first truly immersive virtual reality headset for video games. Oculus Rift is a new virtual reality (VR) headset designed specifically for video games that will change the way you think about gaming forever. With an incredibly wide field of view, high resolution display, and ultra-low latency head tracking, the Rift provides a truly immersive experience that allows you to step inside your favorite game and explore new worlds like never before.

A Future-Friendly Web

Article: Source

I presented “For a Future-Friendly Web,” which covered how we as web creators can think and act in a more future-friendly way. Here are the slides, video and notes from my talk:

  • The web is now a lot bigger than what we’ve been used to. There are more web-enabled devices than ever: smartphones, dumbphones, e-readers, tablets, netbooks, notebooks, desktops, smart TVs, game consoles and a whole lot more.
  • All of these devices are just the beginning. There’s a whole host of connected devices right around the corner. Disruptions like Google’s Project Glass will continue to redefine our connected world.
  • Because change is so rapid, it would be foolish to claim that we can create anything that’s truly “future proof”. But just because we can’t predict the future doesn’t mean there aren’t things we can do to be better prepared for whatever comes down the pike.
  • The power of the web is its ubiquity. No native platform or proprietary solution can claim the same level of reach as the web. This ubiquity is becoming increasingly important as more and more devices emerge. The web’s intrinsic inclusiveness is something that should be preserved and embraced.
  • First and foremost, we need to create relevant, purposeful content. There’s more stuff than ever demanding our attention, and we as humans only have the capacity to handle so much.
  • People’s capacity for bullshit is rapidly diminishing. If you don’t focus your products and services, your users will do it for you. Tools like Instapaper, Readability, Safari Reader, Adblock Plus, DVRs, BitTorrent and more allow users to get to the content without the crap that typically goes with it.
  • As Josh Clark eloquently put it, we need to think of our content like water, and get our content ready to go anywhere because it’s going to go everywhere. It’s bigger than the web, native, Facebook, etc. We need to put our content and functionality in front of users wherever they may be.
  • Rethink context. Historically we’ve assumed that users are comfortably seated in front of a desktop or laptop with a strong connection, a large screen and a fast processor. Mobile has shattered those assumptions, and context is now a lot fuzzier. We need to think about both the quantitative (screen size, processing power, input methods, etc.) and qualitative (user goals, environment, capabilities, etc.) aspects of context when designing experiences.
  • Invest in content infrastructure. Too often redesigns are like slapping a new coat of paint on an otherwise-condemned building. Content is the foundation on which everything else stands. That means creating context-agnostic APIs and more robust, flexible content management systems that lend themselves to adaptation (see the first sketch after this list).
  • Think more responsively. Responsive web design isn’t about creating squishy websites; it’s about creating optimal experiences across an increasing number of contexts. Unfortunately, many people, both proponents and opponents, miss the point.
  • Users don’t care if your site is responsive, a separate mobile site, or even a plain old desktop site. They do care if they can’t accomplish their goals, if the experience takes 30 seconds to load, or if interactions are buggy and broken.
  • Mobile is more than just a small screen. We should keep mobile’s constraints and opportunities in mind when designing experiences.
  • Progressive enhancement is becoming increasingly important. Laying a solid semantic foundation, writing mobile-first styles and using feature detection are good techniques for supporting more web-enabled devices while still optimizing for the best (see the second sketch after this list).
  • Entirely separate experiences aren’t scalable in the long run, but building a separate mobile site might be the reality for now. This can be a great opportunity to lay a future-friendly foundation. Don’t wait for the “perfect opportunity” to start taking steps in the right direction.
  • This is going to be difficult, but it’s absolutely necessary. It will require all of us working together like never before, so let’s set aside petty differences and realize that we’re all on the same team trying to figure all this out. Let’s keep learning from each other.
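
To make the content-infrastructure point above a bit more concrete, here is a minimal sketch in TypeScript of what a context-agnostic content API might look like. Everything here (the Article shape, the api.example.com URL, the field names) is invented for illustration and isn’t taken from any particular system.

```typescript
// Hypothetical content model: structured, presentation-free data that a
// responsive site, a native app, or a future device could all render.
interface Article {
  id: string;
  title: string;
  summary: string;
  body: string;      // source content, no layout baked in
  published: string; // ISO 8601 date
  tags: string[];
}

// A context-agnostic endpoint returns data, not markup; each client decides
// how (and how much) to present. The URL is a made-up example.
async function fetchArticle(id: string): Promise<Article> {
  const response = await fetch(`https://api.example.com/articles/${id}`);
  if (!response.ok) {
    throw new Error(`Failed to load article ${id}: ${response.status}`);
  }
  return (await response.json()) as Article;
}

// A small-screen client might show only the title and summary, while a
// desktop client renders the full body. Same content, same API.
```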

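And for the progressive-enhancement bullet, here is a minimal feature-detection sketch, again in TypeScript. The element ID and the geolocation enhancement are assumptions chosen for the example; the point is simply that the baseline page works on its own and the script layers on capability only when the browser supports it.

```typescript
// Hypothetical enhancement layer: the baseline page already works without
// this script; we only add the "near me" button when geolocation exists.
function enhanceStoreFinder(): void {
  const finder = document.getElementById("store-finder"); // assumed markup
  if (!finder || !("geolocation" in navigator)) {
    return; // baseline experience stays untouched
  }

  const button = document.createElement("button");
  button.textContent = "Find stores near me";
  button.addEventListener("click", () => {
    navigator.geolocation.getCurrentPosition((position) => {
      // In a real site this might query the same content API sketched above.
      console.log("User is at", position.coords.latitude, position.coords.longitude);
    });
  });
  finder.appendChild(button);
}

document.addEventListener("DOMContentLoaded", enhanceStoreFinder);
```
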
I’m truly honored to have been part of such an amazing conference. I saw a lot of old friends and met a lot of new ones too. I’m already excited for next year’s Mobilism!