Wednesday, December 21, 2011

Three reasons why telepresence robots trump videoconferencing

Summary: Still in the early stages, telepresence robots offer big advantages apart from their cool factor that could generate interest as their price and performance improve.

I’ve been on a robot kick lately, but in this post I’m going to discuss a trend that hits a little closer to home and could have implications for office workers in just a few years: the rise of telepresence robots (aka telebots).

Still in their early stages, telebots are mobile machines outfitted with cameras, screens, speakers and microphones that allow remote workers to interact with on-site colleagues. From a computer, a person remotely steers an avatar robot around an office to attend meetings, or around a facility to make inspections, often bewildering unsuspecting onlookers along the way.

Is this the future of the office workforce? (Credit: Anybots, Inc.)

The game is on to create less clumsy and more socially acceptable robots for the office. New models from companies like Anybots and VGo Communications have hit the marketplace, while newcomers Suitable Technologies and iRobot (maker of the Roomba vacuum cleaner) are looking to push the envelope with larger screens that can convey facial expressions and gestures for better two-way communication.

Like high-end corporate telepresence and videoconferencing systems, telebots are expensive, with some models costing $15,000 and even as much as $40,000. Consequently, they have found only a few corporate applications (e.g., executive speeches), along with some wins in the medical and education fields.

Today, videoconferencing is by far the more practical choice for businesses looking to reduce or eliminate the cost of business travel. Telepresence robots pursue the same objective, but offer three big advantages apart from their cool factor that could generate interest as their price and performance improve:

Move beyond the meeting room - This one is pretty obvious, but being part of the action, with the mobility to join a group as it leaves a conference room for a break or to wander the hallways for chance encounters with coworkers, makes a telepresence robot the next best thing to actually being there, at least until teleportation arrives.

They do a better job of being you - Even life-sized and in HD, you are still represented in 2D on a screen. As a human-controlled robot, you take on a 3D presence among the people you interact with, just as if you were there in person. In fact, telebots are taking on a more realistic appearance and becoming anatomically correct. Someday, you may be sending your android twin to close a deal.

Quality and compliance work - Moving away from the office environment, Dr. Brian Glassman of Purdue University recently made the case for augmented telebots on Technology Review: “Many factories need consistent inspection, some of these factories are either hostile (require hard hats) or are remote and time consuming and expensive to travel to. Having telepresence robots which can visually inspect things, (using IR, telescoping video, or using microscopes) will give companies the ability to conduct inspection from HQ and make more frequent inspections (travel time takes away from work time).”

Further reading:

DesignNews: The Dawning of the Office Robot

Technology Review: Telepresence Robots Seek Office Work

e-Discovery Team: On Vacation and Can’t Attend an Important Meeting? Use a Robot Stand-in!

Christopher Jablonski is a freelance technology writer.



Future of hard drives 'settled' until 2015

Summary: Hard disk drive makers plan to forge ahead with heat-assisted magnetic recording (HAMR) technology, putting bit-patterned media on the back burner.

Credit: Seagate

For years, hard disk makers argued over a road map that would give the industry a defined next-generation standard, since any further increase in data density would require huge investment.

One camp, led by Seagate, lobbied for heat-assisted magnetic recording (HAMR); others, led by Hitachi GST, called for a move to bit-patterned media.

EE Times reports that the two sides converged on HAMR as their next step.

“There’s a general consensus the huge shift beyond perpendicular is at least three years out, so mainstream [HAMR] products won’t ship until 2014 or 2015,” said Mark Geenen, president of IDEMA, the disk drive trade group and host of Diskcon, an annual industry event that will be held next week in Santa Clara, CA.

Today’s magnetic disk recording techniques (perpendicular) will hit a brick wall in a generation or two when areal density reaches 1-1.5 terabits per square inch. At this point, stored bits get too small to remain stable; a small amount of heat is all it takes to make nano-sized bits flip their magnetization direction.
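To put those sizes in perspective, here is a quick back-of-envelope calculation (my own sketch, not from the article) of how small a square bit cell becomes at those densities:

# Rough bit-cell size at the projected limit of perpendicular recording.
# Assumes square bit cells; real drives use rectangular cells, so this
# is only an order-of-magnitude illustration.

NM_PER_INCH = 25.4e6  # nanometers per inch

def bit_cell_side_nm(terabits_per_sq_inch):
    """Side length (nm) of a square bit cell at a given areal density."""
    bits_per_sq_inch = terabits_per_sq_inch * 1e12
    cell_area_nm2 = NM_PER_INCH ** 2 / bits_per_sq_inch
    return cell_area_nm2 ** 0.5

for density in (1.0, 1.5):
    print(f"{density} Tbit/sq in -> ~{bit_cell_side_nm(density):.0f} nm per side")
# 1.0 Tbit/sq in -> ~25 nm per side
# 1.5 Tbit/sq in -> ~21 nm per side

At roughly 25 nanometers per side, each bit cell contains so few magnetic grains that thermal agitation alone can flip its magnetization.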

HAMR technology uses a magnetic recording medium that is more stable at normal temperatures but must be heated before data can be written. The challenge is to heat a very small area quickly enough while finding recording materials that work with laser diodes integrated into the recording heads. While difficult, sources say it’s easier than the leading alternative: patterning multiple terabits of data uniformly on a platter.

Proponents of bit-patterning have not yet demonstrated how it can be used to mass-produce disks while adding no more than two dollars to the cost of each one.

Meanwhile, Japanese disk drive head supplier TDK has already built HAMR heads. According to reports, TDK could potentially manufacture a 2TB 2.5-inch disk drive with 1TB platters using this technology.

But until all the pieces are in place for HAMR, drive makers are expected to use shingled magnetic recording, a variant of perpendicular, to push areal density to or slightly beyond a terabit.

As for bit-patterning, the approach isn’t expected to take hold until HAMR reaches a limit, which could be 2020 or beyond, when areal density is measured in multiple terabits, notes EE Times.

The Advanced Storage Technology Consortium (ASTC) pools resources from 13 members, including Hitachi GST, Marvell, and Seagate, for R&D efforts that will help make the generational leap beyond perpendicular recording.

At Diskcon, leading researchers will share progress toward HAMR technology.

Sources: EE Times, Channel Register, IEEE Spectrum

Related:

A ’stone-like’ optical disc that lasts for millennia
Drive giants plan next gen tech

Christopher Jablonski is a freelance technology writer.



Tuesday, December 20, 2011

Development boosts lithium-ion battery capacity 8-fold

Summary: Researchers at Berkeley Lab have developed a new kind of anode that can absorb eight times the lithium of current designs.

Lithium-ion batteries are the most common type of rechargeable battery. They are found in laptops, smartphones, and increasingly, in electric cars and smart grids.

Although lithium-ion batteries have many advantages (they maintain full capacity even after a partial recharge and are considered more environmentally safe than other battery technologies), their storage capacity can still be improved.

A team of scientists at Berkeley Lab has designed a new kind of anode that can absorb eight times the lithium of current designs, and it has maintained that greatly increased energy capacity through more than a year of testing and many hundreds of charge-discharge cycles.

“Most of today’s lithium-ion batteries have anodes made of graphite, which is electrically conducting and expands only modestly when housing the ions between its graphene layers. Silicon can store 10 times more – it has by far the highest capacity among lithium-ion storage materials – but it swells to more than three times its volume when fully charged,” said Gao Liu of Berkeley Lab’s Environmental Energy Technologies Division (EETD).
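As a hedged aside (my arithmetic, not the researchers’), that “10 times more” figure follows from the theoretical gravimetric capacities of the fully lithiated phases, computed from Faraday’s law:

# Theoretical gravimetric capacity from Faraday's law:
#   capacity (mAh/g) = n * F / (3.6 * M_host)
# where n is moles of Li stored per mole of host atoms, F is the
# Faraday constant, and M_host is the host's molar mass. Assumed
# lithiated phases: LiC6 for graphite, Li15Si4 for silicon.

F = 96485.0  # C/mol, Faraday constant

def capacity_mah_per_g(li_per_host_atom, molar_mass_host):
    return li_per_host_atom * F / (3.6 * molar_mass_host)

graphite = capacity_mah_per_g(1 / 6, 12.011)    # LiC6: 1 Li per 6 C
silicon  = capacity_mah_per_g(15 / 4, 28.086)   # Li15Si4: 3.75 Li per Si

print(f"graphite: ~{graphite:.0f} mAh/g")       # ~372 mAh/g
print(f"silicon:  ~{silicon:.0f} mAh/g")        # ~3579 mAh/g
print(f"ratio:    ~{silicon / graphite:.1f}x")  # ~9.6x

Real-world anodes fall short of these theoretical ceilings, which squares with the eightfold (rather than tenfold) improvement reported here.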

The swelling quickly breaks the electrical contacts in the anode, so the researchers concentrated on finding other ways to use silicon while maintaining anode conductivity. Through a combination of synthesis, spectroscopy and simulation, the team tailored a polymer that conducts electricity and binds closely to lithium-storing silicon particles, even as they expand to more than three times their volume during charging and then shrink again during discharge.

The new anodes are made from low-cost materials that are compatible with standard lithium-battery manufacturing technologies.

The research team reports its findings in Advanced Materials, now available online.

Source: Berkeley Lab News Center

Christopher Jablonski is a freelance technology writer.



LCD screen harvests light to power devices

Summary: Keeping smartphones and laptops charged when there’s no electrical outlet in sight is a perennial challenge. A novel LCD screen developed by UCLA engineers could potentially help solve the problem.

UCLA engineers have developed an LCD screen with built-in photovoltaic polarizers that harvest and recycle energy from ambient light, sunlight, and its own backlight.

The energy-harvesting polarizer, which in technical terms is called a polarizing organic photovoltaic, can potentially boost the efficiency of an LCD by working simultaneously as a polarizer, a photovoltaic device for the backlight, and an ambient-light or sunlight photovoltaic panel.

“I believe this is a game-changer invention to improve the efficiency of LCD displays,” said Yang Yang, a professor of materials science at UCLA Engineering and principal investigator on the research. “In addition, these polarizers can also be used as regular solar cells to harvest indoor or outdoor light. So next time you are on the beach, you could charge your iPhone via sunlight.”

LCDs, or liquid crystal displays, shine light through a combination of liquid crystals and polarized glass to produce a visible image, albeit inefficiently. According to the UCLA researchers, a device’s backlight can consume 80 to 90 percent of the device’s power, but as much as 75 percent of the light generated is lost through the polarizers. A polarizing organic photovoltaic LCD could recover much of that lost energy.

Youssry Boutros, program director at Intel Labs, said: “The polarizing organic photovoltaic cell demonstrated by Professor Yang’s research group can potentially harvest 75 percent of the wasted photons from LCD backlight and turn them back into electricity.” Intel supported the research through its Intel Labs Academic Research Office (ARO).
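Chaining those percentages together gives a rough sense of the possible savings. In this sketch (my own; only the first three figures come from the article, and the conversion efficiency is an assumption), the recovered share of total device power works out to:

# Back-of-envelope estimate of recoverable device power. The first three
# figures are from the article; the conversion efficiency is assumed for
# illustration, and electrical-to-optical losses in the backlight are
# ignored, so treat the result as an optimistic upper bound.

backlight_share = 0.85  # backlight: 80-90% of device power (midpoint)
polarizer_loss  = 0.75  # up to 75% of generated light lost in polarizers
photon_harvest  = 0.75  # fraction of wasted photons captured (Intel quote)
pv_efficiency   = 0.10  # assumed photon-to-electricity conversion

recovered = backlight_share * polarizer_loss * photon_harvest * pv_efficiency
print(f"~{recovered:.1%} of total device power recovered")  # ~4.8%

Even a single-digit recovery would be meaningful on a device where the display dominates the power budget.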

“In the near future, we would like to increase the efficiency of the polarizing organic photovoltaics, and eventually we hope to work with electronics manufacturers to integrate our technology into real products,” Yang said. “We hope this energy-saving LCD will become a mainstream technology in displays.”

Below is a short clip of the UCLA team making the polarizing film using P3HT, an organic polymer widely used in solar cells.

The research is published in the online version of the journal Advanced Materials.

(Source: UCLA)

Christopher Jablonski is a freelance technology writer.



Monday, December 19, 2011

Scientists create wireless network with LED room light

Summary: German researchers have demonstrated how regular LEDs can be turned into an optical WLAN with only a “few additional components.”

Lights are no longer just for lighting up.

Scientists from the Fraunhofer Institute for Telecommunications, Heinrich Hertz Institute (HHI) in Berlin, Germany, have developed a new kind of optical WLAN with enough throughput to allow four people in a room to watch a film from the Internet on their laptops, in HD quality.

The technology can potentially be used on both laptops and mobile telephones.

Credit: Fraunhofer HHI

The researchers say they’ve achieved a data transfer rate of 100 megabits per second (Mbit/s) without any losses, using LEDs in the ceiling that light up more than ten square meters (about 108 square feet). That footprint also marks the range within which the receiver, a simple photodiode on the laptop, must stay.

In lab tests, the team pushed speeds even further using red, blue, green and white LEDs. Those transmitted data at a blistering 800 Mbit/s, setting a record for VLC, or visible light communication.

Klaus-Dieter Langer, the project leader, said: “For VLC the sources of light – in this case, white-light LEDs – provide lighting for the room at the same time they transfer information. With the aid of a special component, the modulator, we turn the LEDs off and on in very rapid succession and transfer the information as ones and zeros.”

The system works because the modulation of the light is imperceptible to the human eye. Langer explains: “The diode catches the light, electronics decode the information and translate it into electrical impulses, meaning the language of the computer.”
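What Langer describes is, at its simplest, on-off keying: the LED is driven on for a one and off for a zero, faster than the eye can follow. Here is a minimal, purely illustrative sketch (the HHI system’s actual modulation scheme is more sophisticated):

# Minimal on-off keying (OOK) sketch: the LED emits light for a '1' and
# none for a '0'; the photodiode samples the light level and thresholds
# it back into bits. Illustrative only -- not the HHI implementation.

def modulate(bits):
    """Map each bit to an LED intensity (1.0 = on, 0.0 = off)."""
    return [1.0 if b else 0.0 for b in bits]

def demodulate(samples, threshold=0.5):
    """Threshold photodiode samples back into bits."""
    return [1 if s > threshold else 0 for s in samples]

message = [1, 0, 1, 1, 0, 0, 1]
light = modulate(message)            # what the LED emits
received = [s + 0.1 for s in light]  # crude ambient-light offset
assert demodulate(received) == message

At 100 Mbit/s the light is switching tens of millions of times per second, far too fast for the human eye to perceive any flicker.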

While rigging a system to turn LEDs into a transfer medium may not require many components, sending data over light waves is not without challenges. The key one is that whenever an object (like a hand) comes between the light and the photodiode, the transfer is impaired.

The HHI scientists stress that the optical WLAN is not intended to replace other networks, but rather to serve as an additional, unobtrusive option in environments where radio transmission networks are not desired or not possible, such as hospital operating rooms.

“Combinations are also possible, such as optical WLAN in one direction and PowerLAN for the return channel. Films can be transferred to the PC like this and also played there, or they can be sent on to another computer,” notes a release.

The scientists will demonstrate how videos are transmitted by light at the International Telecommunications Fair IFA (Internationale Funkausstellung IFA) in Berlin from September 2-7, 2011.

Related:

MIT: built-in motion sensors in devices improve wireless data rates

A wireless radio that can send and receive signals at the same time

‘Microring’ wireless devices could nix wires in homes, offices

Christopher Jablonski is a freelance technology writer.



Dreamforce: UCSF converting science into public benefit

Summary: At Dreamforce 2011, the director of the California Institute for Quantitative Biosciences (QB3) shares a vision for the future of medicine based on precision diagnosis, empirical pharmacology, and information technology.

Here’s a sobering thought: Half of those who reach the age of 85 will have Alzheimer’s disease. Currently, there’s no cure, no treatment, and no drug or therapy in the pipeline.

The answer to this problem and other healthcare challenges could lie in a new approach that links the physical sciences – mathematics, physics, chemistry and engineering – with the bio-sciences while adopting the latest trends in the IT industry.

That was the key message from Dr. Regis B. Kelly, director of the California Institute for Quantitative Biosciences (QB3), who spoke yesterday at “Unusual Thinkers: The UCSF Track” at Dreamforce 2011, held at San Francisco’s Moscone Center.

QB3 is an academic consortium consisting of three University of California campuses (UCB, UCSC & UCSF) working together on “converting science into public benefit” so that the promise of personalized medicine, rational drug design, early diagnosis, and reduced healthcare costs may one day be realized.

Healthcare costs are spiraling out of control due to problems in pharmacology, according to Kelly. He presented a slide illustrating how R&D expenditures for the pharmaceutical industry have increased from $50 billion in ’05 to nearly $70 billion in ’09, while the number of new drugs to gain FDA approval has steadily declined.

“Using the best science we have, we fail 9 out of 10 times, so our basic understanding is lacking,” Kelly said.

Furthermore, all drugs have side effects. This is because inhibiting one protein with a drug affects 50 to 1,000 others, according to Kelly.

“The drug industry is in a perfect storm. The number of drugs have dropped by half and the cost is too high for development.”

(Coincidentally, Andy Grove, co-founder and former CEO of Intel spoke about this topic at a QB3 event this week held at Genentech. Grove said that in terms of time and investment, the closest equivalent process in history to the creation of a single drug is the construction of a single pyramid in ancient Egypt.)

So how will medicine evolve over the next 10 years to improve healthcare and reduce costs? Kelly explained that it will do so by combining precision diagnosis and empirical pharmacology.

Precision diagnosis doesn’t mean your doctor will no longer check your blood pressure and give you a traditional examination. It does, however, consider how individual variations in your genome can have a major impact on how your body specifically responds to disease, drugs, and other therapies.

The idea is to predict exactly how a protein’s function will change if its composition is changed. How a patient will respond to a new therapy should be looked at from a systems perspective, just as engineers do when building models to determine if a circuit will work or how well an airplane will fly.

This will become more practical as genome sequencing gets faster and cheaper, according to Kelly.

As old as humankind, empirical pharmacology is simply experimenting with potential cures until a solution is discovered by accident. With robotics and molecular diagnostics, we’ll be able to take a genetics approach to pharmacology and, through trial and error, develop the tools to predict biological processes and then develop cells and microorganisms that provide unique resources such as drugs.

One of the issues with taking a quantitative approach to bio-sciences is that you generate vast amounts of data, and that’s where Salesforce.com comes into the picture.

Kelly pointed out that the new Salesforce.com headquarters will be located across the street from QB3 in the Mission Bay neighborhood of San Francisco. He said that a meaningful relationship can be forged because the cloud and heterogeneous computing solutions will be essential for the emerging big data problems in biology.

“This is not about getting new robotic systems or algorithms. It’s about figuring out a way to prevent you from asking who you are when you are in your 80s,” Kelly said.

In addition to Dr. Regis Kelly, the Unusual Thinkers (#df11ucsf) track at Dreamforce featured 7 other leading researchers and practitioners at UCSF.

Related:

Dreamforce: ‘Unusual thinkers’ needed for healthcare reform

Dreamforce Event to Feature UCSF Dream Team on Sept. 1

The enterprise opportunity of Big Data: Closing the “clue gap”

Christopher Jablonski is a freelance technology writer.



Sunday, December 18, 2011

Are megacities sustainable?

Summary: The world’s largest urban centers will be responsible for providing food, shelter, and jobs to roughly 8 billion people by 2100. The biggest obstacle may not be adequate resources or technology, but rather management, say experts.

Will megacities be surrounded by parched, lifeless lands? (Credit: ilker canikligil)

Soon after the United Nations declared that the world population has topped 7 billion people, doomsday advocates sounded off about pending shortages of energy, water, and food as optimists turned a blind eye, saying that we’ve heard it all before.

One thing is for certain: more and more people are living in cities, and increasingly in megacities (cities with over 10 million inhabitants). In 1975, there were just three cities that fit the bill: New York, Tokyo and Mexico City. Today, there are at least 20 more. Many, including Shanghai, Jakarta, and São Paulo, have reached supercity status (greater than 20 million).

Financial Times editor David Pilling writes about how the character of cities is being rapidly redefined, noting that by 2050 three-quarters of the world’s population will be urban. By 2100, the figure will nudge up to 80%; that’s 8 billion urbanites among the UN’s projected 10 billion souls on earth at the turn of the century.

This raises many questions about the future of urbanization. The first that comes to mind: will human ingenuity march in lockstep with that growth, advancing agriculture, energy and technology enough to sustain the urban centers of tomorrow? If past performance is any measure of future success, then the answer is a cautious “yes”.

Blame limits to growth on management, not resources

The biggest challenge, say experts, is actually management. Poor management is the leading cause of inadequate housing, transportation systems, pollution control and disaster preparedness.

Researchers at the McKinsey Global Institute have been studying urbanization and found that there is, in theory, no limit set by technology or infrastructure to how big or how fast cities can grow, and that problems stemming from rapid city growth are not directly the result of insufficient resources but rather of poor management:

…the growth of most urban centers is bound by an inability to manage their size in a way that maximizes scale opportunities and minimizes costs. Large urban centers are highly complex, demanding environments that require a long planning horizon and extraordinary managerial skills. Many city governments are simply not prepared to cope with the speed at which their populations are expanding.

McKinsey suggests that there are four principles of effective city management: (1) funding for infrastructure; (2) modern, accountable governance; (3) proper planning horizons that span 1 to 40 years; and (4) dedicated policies in critical areas such as affordable housing. At least in the technology department, progress is well underway.

Mass urbanization will bring with it mass digitization

As cities grow larger and more rapidly, new “smart city” technologies will unleash massive streams of data about cities and their residents. City-scale operating systems are already in development and promise to intelligently monitor and automate traffic lights, air conditioning, water pumps, and other systems that influence the quality of urban life while driving down the costs of operating a city.

New sources of information could also provide the opportunity for cities to improve government services, alleviate poverty and inequality, and empower the poor, according to a report from the Institute of the Future.

To learn more:

An operating system for smart cities

Interview: MIT’s SENSEable City Laboratory

Urban ecosystems will work in tandem with their natural environments

Paris reinterpreted: First-place winners of the Living City Design Competition, Daniel Zielinski and Maximilian Zielinski, illustrate how people can thrive in partnership with nature.

Large-scale sustainability projects like Masdar City in the United Arab Emirates and Germany’s “Morgenstadt” serve as models for green urban development. They showcase how cities can obtain power from renewable resources, run quieter with fleets of electric vehicles, and promote low-energy living using smart meters.

It could take decades, but today’s metropolises will gradually restructure using these technologies to reduce carbon emissions and achieve greater harmony with the natural environment.

Leading the charge for an ecologically restorative future is the International Living Future Institute, a non-profit organization that is “raising the bar for true sustainability”. Through its Living Building Challenge, the institute has defined a set of rigorous development standards that exceed every LEED (Leadership in Energy and Environmental Design) certification level, including platinum. To date, there are active programs in the U.S., Canada, and Ireland, and the organization is looking to expand in other countries.

The world in 2100

While speculative, a convincing McKinsey analysis suggests that by 2100, urban-to-urban cross-border migration will be more prevalent than it is today, resulting in an “immense intermingling of ethnicities”. The world will go from a 7,000-language planet to a couple of hundred languages at the most, and the gap between rich and poor should narrow in all places.

Not to be a Debbie Downer, but there’s also the potential for food shortages stemming from extreme droughts, and unemployment could become a significant issue in the face of inexorable growth in automated technologies. On the plus side, the average global lifespan will increase to 81 years, and hydro and other renewable energy could be our primary sources of power by 2100 and beyond.

In 1798, Thomas Robert Malthus famously argued that poverty and famine were natural outcomes of population growth, and that controls should be put in place to evade disaster. More than two hundred years later, the population is 7x greater and the specter hasn’t come true. Hopefully, that remains the case for the next 100 years and beyond.

Related:

Massive timeline of future history
The silver lining of a world run amuck by machines
A roadmap for growing prosperity while saving the planet

Christopher Jablonski is a freelance technology writer.

