April 30, 2012

IN BRIEF, New Scientist, 28 April 2012




Male bowerbirds grow a garden to attract a mate:
WHAT has green fingers but no hands? The bowerbird, if a new study is to be believed. Males appear to cultivate plants around the structures they build to attract a mate.

Male spotted bowerbirds (Ptilonorhynchus maculatus) build structures, or bowers, from twigs before intricately decorating them with objects to attract a female. One of the males' most desirable decorations is the berry of the Solanum ellipticum plant. Joah Madden of the University of Exeter, UK, and colleagues studied the distribution of S. ellipticum in an area of Queensland, Australia, inhabited by the birds.

Although the males didn't build their bowers in locations with abundant S. ellipticum, a year after construction there were, on average, 40 of the plants near each. Birds with more plants nearby had more berries within their bowers, which Madden has previously found is the best predictor of a male's mating success. Males may discard shrivelled berries outside their bowers (Current Biology, DOI: 10.1016/j.cub.2012.02.057).

The bowerbirds are thus shaping the distribution of the plants in the area, but is it cultivation? Madden acknowledges the results do not imply that the birds intentionally grow the plants. But he points out that some hypotheses favour similarly unintentional origins for human agriculture, suggesting the bowerbirds' activities could just about fall under that definition.
Epigenetic changes linked with ageing:
 
SOME of the genetic changes associated with ageing may be the result of epigenetics, which suggests they could be reversed. Molecules can attach to DNA, enhancing or preventing gene activation without changing the underlying genetic code. Such epigenetic changes are already suspected as factors in psychiatric disorders, diabetes and cancer. They may also play a role in ageing.

Jordana Bell of King's College London and colleagues looked at the DNA of 86 sets of twin sisters aged 32 to 80, and discovered that 490 genes linked with ageing showed signs of epigenetic change through a process called methylation. "These genes were more likely to be methylated in the older than the younger [sets of] twins," says Bell, suggesting that the epigenetic changes themselves might contribute to ageing (PLoS Genetics, DOI: 10.1371/journal.pgen.1002629).

The next challenge is to establish when gene methylation occurs. It can be triggered through lifestyle factors such as smoking, and environmental stresses. It may one day be possible to develop enzymes that can remove the offending molecules from DNA and reverse methylation, and with it some aspects of ageing.
Gut-microbe swap flips eating habits:

EATEN too many pies? Blame the microbes in your gut; they may be influencing how much you eat. In 2006, biologists found that the types of bacteria in the guts of obese rats differed from those in non-obese rats. To find out more, Mihai Covasa and his colleagues at the French National Institute for Agricultural Research (INRA) in Paris swapped gut bacteria between obesity-prone and obesity-resistant rats. The obesity-resistant rodents proceeded to eat more and pile on the pounds. They also developed gut hormone levels typical of obesity-prone rodents. These rats are a good model for human obesity: people, too, are either resistant or vulnerable to the condition. Understanding the gut flora associated with it may offer ways to help control food intake, Covasa said this week at the Experimental Biology 2012 meeting in San Diego, California.
Neutrino no-show spoils ray theory:
THE failure by an Antarctic telescope to spot neutrinos has knocked down a major theory about the origin of high-energy particles known as cosmic rays. Theorists had thought explosive bursts of gamma rays could be behind the cosmic rays, so the IceCube telescope had been looking for neutrinos that ought to be produced at the same time. Finding no neutrinos is a serious blow because it rules out gamma-ray bursts, says principal investigator Francis Halzen at the University of Wisconsin, Madison (Nature, DOI: 10.1038/nature11068). So where do the cosmic rays come from? Nobody is sure, but attention will now shift to active galactic nuclei powered by supermassive black holes.

From: New Scientist, 28 April 2012

STORAGE FOR Private Clouds




Leverage The Benefits, Avoid The Pitfalls

Enterprises often see dollar signs when they consider how much money they might save by relying on a cloud provider for their storage needs.


In theory, at least, the advantages are obvious: an off-site cloud provider offers the necessary infrastructure and management services at a far cheaper price than what it would cost to maintain storage in-house. But when data is stored on the cloud, the fear is that it is anyone’s guess where and on what kind of infrastructure the data is housed. The data leaves the enterprise and is supposed to be accessed as needed, but the enterprise usually has very little control over exactly how and where it is stored. Some cloud services also fail to meet the security, regulatory compliance, and data-availability requirements that an enterprise may have.

For better data-management control and security, private cloud storage can serve as an alternative to a public cloud option. Private cloud solutions can also help enterprises remain compliant by guaranteeing that data is stored on dedicated equipment that is not shared with other customers. “Think of a private cloud as a particular application on dedicated resources. Private cloud storage offers an identifiable physical set of resources that are running that application on a specific server, a set of storage arrays, and a segment of a network,” says James Staten, an analyst for Forrester Research (www.forrester.com). “Everything is 100% dedicated to a client and to that particular storage application on a private cloud.”

However, enterprises should consider the drawbacks as well as the potential benefits before putting their most important and sensitive data on a private cloud. When looking to store sensitive data on a private cloud, the cloud provider’s services and processes must be closely scrutinized to ensure that security and regulatory compliance requirements can be met. A private cloud solution for storage must also meet business requirements without unnecessary complexity or data-access latency compared to what traditional SAN (storage area network) environments offer.

SETTING UP A PRIVATE CLOUD

Creating and then managing a storage environment for sensitive, mission-critical, or other important data is usually a very complex undertaking for an enterprise to complete in-house. An advantage that private cloud storage offers is increased agility compared to setting up on-site storage. Besides the cost savings, a viable provider should be able to offer the capacity and capabilities required on demand.

“When a CEO previously asked a CIO how long it would take to [set up] a storage system for [mission-critical data], the CIO might have said, ‘That will be three months and $3 million and I will call you when it’s done,’” says Gene Ruth, an analyst for Gartner (www.gartner.com). “With private cloud storage, the CIO just says, ‘No problem, it will be ready for you in an hour.’ That’s what private storage is about: agility and provisioning on demand.”

However, the pace at which service providers can set up a private cloud storage environment is immaterial if it cannot meet an enterprise’s specific requirements and needs. A particular private cloud offering may seem like an attractive option, but it will fail if it interferes with business processes by not functioning properly. “It is important to ensure that the business logic and the data are kept close to each other,” says Clive Longbottom, founder of and analyst for Quocirca (www.quocirca.com).
“Latency, for example, can kill the system with slow response and data dropouts.”

SECURITY, AUDITS & SLEEPING WELL AT NIGHT

Enterprises are usually hesitant about storing their sensitive and mission-critical data on the cloud, and for good reason, considering the compliance risks inherent in relinquishing control of data to a third party without much influence over where and how it is stored. But the beauty of the private cloud is that enterprises know exactly how their data is stored and that their data is better protected than it would be on a public cloud.

“[Private cloud storage] will help on the regulatory front since there are no worries that the data is commingled with data from other companies or [if data is not supposed to be] stored offshore,” says Joe Malec, a fellow at the Information Systems Security Association (www.issa.org). “Some private cloud storage solutions may also help with collecting and managing data from different sources and geographic locations. This helps with organization and management during audit time.”

However, the level of security that providers offer can vary. “The devil is in the details. The private cloud storage provider should be treated as a third-party consultant/vendor and the same oversight should exist,” Malec says. “Considerations include how they gain access to the network, the kind of access they have, and how can access be restricted.”

Enterprises also need to tread carefully about how much control a service provider has over the data. “You [likely] don’t want [the private cloud storage provider] to have free rein of a company’s environment. But if a company pursues this option, then they need to have a plan for the implementation and oversight of the system and the vendor,” Malec says. “Otherwise, come audit time, things may not go very well if the audit finds no effective controls in place to ensure the protection of the environment.”

THE RIGHT FIT

It can be difficult to compel users to opt for an enterprise’s private cloud for sensitive data storage if it is difficult to access or use for individual purposes. Many users will also be tempted to rely on the many user-friendly data storage alternatives available in the consumer space. The remedy is to take a survey of consumer storage solutions that users know and use for their own purposes and then find a private cloud solution that best matches what they are already accustomed to. “A big mistake is to not ask the users what their experiences are like when they use [consumer] cloud storage solutions,” Staten says. “Enterprises really need to know what experiences they have to match.”

Private cloud storage solutions that are at least as easy to use as consumer storage services can entice users to comply with policy when a private cloud needs to be used. Some private cloud data storage environments have interfaces that look and function like a PC hard drive. Other offerings reflect the popular Dropbox usage model.

But user adoption of private cloud storage in the enterprise can be too successful. Some users might rely on it more than they should, without realizing the expense involved. According to Gartner, a private cloud storage service that includes automatic disaster recovery and downtime of less than 30 minutes per year can cost as much as $20 per gigabyte per month. Comparatively, a public cloud storage solution might cost $1 per gigabyte per month and only offer one-hour disaster recovery and suffer from disruptions of up to 48 hours per year, according to Gartner.
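To put those Gartner figures in concrete terms, here is a back-of-envelope Python sketch. The per-gigabyte prices come from the article; the 5 TB capacity is an illustrative assumption, not a figure from PC Today:

    # Monthly cost comparison using the Gartner figures quoted above.
    PRIVATE_PER_GB = 20.0   # $/GB/month: automatic DR, under 30 min downtime/year
    PUBLIC_PER_GB = 1.0     # $/GB/month: 1-hour DR, up to 48 hours downtime/year

    capacity_gb = 5 * 1024  # hypothetical 5 TB of enterprise data

    private_monthly = capacity_gb * PRIVATE_PER_GB
    public_monthly = capacity_gb * PUBLIC_PER_GB

    print(f"Private cloud: ${private_monthly:,.0f}/month")   # $102,400/month
    print(f"Public cloud:  ${public_monthly:,.0f}/month")    # $5,120/month
    print(f"Private premium: {private_monthly / public_monthly:.0f}x")  # 20x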
The allocation of private cloud storage should thus be carefully managed. “We see tons of people putting way more files than they should on a private cloud even though it is much more expensive than a public cloud,” says Max Haskvitz, operations manager for eRacks (www.eracks.com). “There are also cases where executives might save all of their data to a private cloud storage network unnecessarily while all of the other employees’ data is automatically stored on a public cloud, including sensitive data that should be on a private cloud.” How should businesses address this problem? “I would prepare a basic map of data that you plan to put up on the cloud,” says Haskvitz, “so you have a visual of your storage and structure needs.”

NO CONNECTION, NO SERVICE

One of the cloud’s inherent risks is that it requires an Internet connection to function. Companies often need 24/7 access to critical data, so it’s important to ensure accessibility for critical data stored on a private cloud. One way to help ensure that connectivity will never be lost is to have two additional high-speed connections between the cloud network and the enterprise as a backup in case the main connection fails. “Someone can put a backhoe through the cable, or it could be dug up by someone wanting to try and reclaim the scrap value of it,” Longbottom says. “There is always a need to ensure that there is a redundancy of connection.”

Employing on-premise NAS (network-attached storage) systems can offer an additional layer of protection in case of a disruption. “Having a duplicate copy of your important storage data on a private cloud network as well as onsite on a NAS system is [highly advisable],” Haskvitz says. “Let’s say your private cloud connection gets interrupted for whatever reason, even for 30 minutes—that can really [cause problems] with a sale.”

Maintaining duplicate copies of all private cloud data on a NAS system is a sound practice as well, even if the data is not mission-critical, Haskvitz says. “It never hurts to have an onsite NAS solution just to have yet another fallback for your data if the Internet goes down, the cloud access is interrupted, or you are in a sensitive setting that is not allowing outside access.”


From PC Today / May 2012

April 18, 2012

A sneak peek at Windows 8





Microsoft's latest operating system is now ready for the world to explore firsthand. Before you decide whether or not to download it, check out our first impressions. BY MICHAEL MUCHMORE

The tablet- and touch-centric operating system would seem to be a hard sell to users of good old PCs, but Microsoft claims there's no need for the "tyranny of or": Windows 8 can serve both tablet and desktop users without compromises. That's the party line, anyway. We'll get a better idea of whether the general public agrees after this Consumer Preview is more widely adopted.

Microsoft's mission with Windows 8, whose final version is expected by the end of the year, is not an easy one to pull off: creating an operating system that works equally well on both a touch tablet and a traditional PC with keyboard and mouse. Apple, by contrast, has expanded iOS for tablet duty as well. Microsoft's take is that Windows 8 will deliver a full-power OS, without compromises, for both types of users.

HOW IT COMPARES TO OS X LION, CHROME

In some ways, Windows 8 resembles OS X Lion (and Mountain Lion) more than iOS, with swiping to switch between apps and a fully accessible file/folder system. Where Apple has migrated features from its mobile OS to its desktop OS, Microsoft has created a hybrid that should be comfortable in both settings, and though Windows 8 lacks the final polish and sturdiness of iOS, Microsoft has made admirable progress toward that goal. An even closer comparison might be Google's Chrome OS (remember that?), except that, with Windows 8, you don't just get the Web-app-like Metro apps, but also the full body of Windows apps, too. And unlike Chrome OS, on which everything lives in the cloud, Windows 8 gives you both the cloud and powerful local apps and accessible storage.

For this hands-on report, I used the same Samsung tablet that was handed out at Microsoft's Build Conference last September. My first quick impressions are that it does an even better job of smoothing out the transition between the Windows Phone-like Metro tile interface and the more traditional Windows desktop mode that will be more familiar to longtime Windows users. It also makes even smarter use of touch gestures.

SETTING UP, SIGNING IN

When you first run the Windows 8 Consumer Preview, you need to go through a four-step setup: Personalize, Wireless setup, Settings, and Sign in. Each step is very simple and uncluttered. Next comes signing in. In order to download apps from the Windows Store and take advantage of the SkyDrive cloud service that stores files and photos and syncs your settings with other machines, you need to sign in with a Windows Live ID.
" WhereApple bas migrated features fromits mobileOSto itsdesktop OS, Microsoft hascreated a hybrid that shouldbe comfortable inboth settings. "
After this, you finally get your first look at the Windows 8 Metro start screen! This grid-like display of brightly colored rectangular "live tiles" is where you launch any apps, control settings, and enter the more traditional Windows desktop. After a shutdown and restart, you'll see the lock screen (which will be familiar to any smartphone user). On this you can see battery charge, Wi-Fi signal strength, and notifications for e-mail and any other apps you've allowed. A new type of notification for Consumer Preview is the "toast" that pops in from the upper right if, for example, you have an incoming instant message. The new preview also adds the ability to boot from a USB stick or other external device or disc.

PICTURE PASSWORD

There's a new way to get past this informative lock screen: the picture password. I was a little surprised that the setup process didn't allow me to create a picture password, since Microsoft has talked about this feature a lot in conferences and on the Building Windows 8 blog. It's a clever feature that saves you from having to type on your touch screen. To create a picture password, tap Settings, then More PC Settings, and choose Users. From here, you can not only create the picture password, but also switch to a local account (without SkyDrive benefits), change your regular password, or create a 4-digit PIN that lets you quickly start, much as you can with iOS devices. The first step is to actually choose your picture. Something with several objects and shapes is best. You then simply draw any combination of three circles, taps, or lines. You then repeat the pattern to confirm it, and, voilà. The first time I tried to sign in, my "password" wasn't accepted, but it soon became second nature. The feature shows how deeply Microsoft has been thinking about touch interfaces, letting you log in with gestures rather than character entry. And for those worried about security, Microsoft has done the analysis that shows there are over a billion possible gesture combinations for this type of password.
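Those numbers are easy to sanity-check. What follows is a back-of-envelope counting sketch in Python, not Microsoft's published analysis: the 10x10 alignment grid, the five circle radii and the two circle directions are all assumptions chosen for illustration.

    # Counting three-gesture picture passwords under a toy model in which
    # gestures snap to a 10x10 grid of points (an assumption).
    POINTS = 10 * 10

    taps = POINTS                      # a tap is a single grid point
    lines = POINTS * (POINTS - 1)      # ordered start and end points
    circles = POINTS * 5 * 2           # center, ~5 radii, 2 directions

    per_gesture = taps + lines + circles   # 11,000 choices per gesture
    print(f"{per_gesture ** 3:,}")         # 1,331,000,000,000
    # Three gestures in sequence: comfortably over a billion.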
NEW SWIPE GESTURES

A key Windows 8 concept for touch input is that the sides of the screen are for Windows, while the top and bottom are for the app you're running. Swipe in from the right side, and you'll see the Windows 8 "Charms," or icons that give access to basic OS functions, including Search, Share, Start, Devices, and Settings. These Charms have been redesigned in the Consumer Preview, with the new Windows logo showing up for the Start choice and the rest getting new polish. Using the mouse, you get to the Charms by moving the pointer to the upper-right corner of your screen. Swiping from the left edge of the screen switches you to a previous running app, but also lets you pin a sidebar showing the app's content (formatted just for this space). New in the Consumer Preview is the option to easily swap the large and small views by swiping down from the top and moving the resulting smaller window. Swiping up from the bottom or down from the top opens an app's own menu.

Windows 8 offers an advantage over both iOS and Lion: the ability to use a swipe gesture to give a peek at another running app. In iOS, you have to completely switch out of one app to take a look at another. The gesture of swiping to show a sidebar populated with a second app works for full-blown Windows desktop apps, too.

Semantic zoom is a helpful innovation. By using a pinch gesture on the Start screen, the app icons shrink, but not in the simple way you zoom out on a photo; the tiles resize to remain readable, and your groups of tiles stay together, all visible on one screen. This lets you do things like moving an app's tile from the first to the last page without a lot of scrolling.

ENTERING TEXT WITH TOUCH

Windows 8's on-screen keyboard springs up from the bottom of the screen whenever you touch a text-entry field. It's a very versatile tool, more so than other mobile operating systems' equivalent. You can either use a full keyboard, a split keyboard suited to thumb entry, or stylus input on the touch-screen keyboard. Now let's look at what you can do with an actual keyboard and mouse.

NEW KEYBOARD AND MOUSE FUNCTIONS

Microsoft's philosophy for mouse interaction with the OS is that the corners are key. In previous versions of Windows, the Start button was in the lower-left corner, every app's X to close its window was in the top-right corner, the most important menu item was at the top left, and the Aero Peek button in Windows 7 is in the lower-right corner. An important improvement to using Windows 8 with a keyboard is that now you can scroll the Metro Start screen's tiles simply by nudging the mouse cursor against the right side of the screen. With the Developer Preview, you had to move the cursor down to the bottom edge of the screen and grab the scroll bar, or hit Ctrl-Right Arrow. You can still scroll the Start tiles with the mouse wheel, which is nice.

Fans of keyboard shortcuts won't be disappointed: Windows 8 includes a ton of very useful shortcuts, many of which take advantage of the Windows key. Hitting this by itself at any time takes you back to the Metro Start screen, and hitting it again returns you to your running app. The venerable Alt-F4 now closes any kind of Windows 8 app (as does slowly swiping to the bottom of the screen). Of particular interest to the tech journalist is the new screen-capture feature, Windows Key + PrtScn. A final very useful option is Ctrl-Shift-Esc, which opens the task manager. I'll do a separate article going into more depth on what you can do with keyboard shortcuts in Windows 8.

NEW APPS

At the Build Conference launch of Windows 8 Developer Preview, the new operating system launched with a couple dozen Metro apps coded by college interns, in an effort to show how you don't need a PhD in computer science to write for the system. With the Consumer Preview, we get several new polished apps programmed by professionals. There are actually far fewer included apps this time, but the Windows Store's grand opening means even more choice. Since the store didn't go live until February 29th, look for a separate article detailing that soon, too.
The Consumer Preview's included apps are limited to a dozen or so essentials: Mail, Photos, Weather, Finance, Maps, People (for social updates), Calendar, Video, Messaging, and Music. You also get a couple of games: good old Solitaire and a pinball game. This last is connected to Xbox Live, which you're encouraged to get an account with, to sync your gaming on different devices. I appreciated that the Photos app let me view pictures on Flickr, Facebook, and SkyDrive as well as on the local device. All of the utilitarian apps are very clean and minimalist, but they still offer most of the features you want. The Mail app gave me no problems hooking in a Gmail account and composing messages with attachments. The Messaging app let me connect through Facebook and Windows Live Messenger, but it's not an SMS replacement like Apple's iMessage, spell-correction wasn't working, and there was no video chat. The People app did a nice job of aggregating my Facebook, Twitter, and Live feeds, but its use of space wasn't very efficient, with each tweet taking the full screen height. A lot of the quibbles are certainly things Microsoft will address before release.

NEW FOR THE DESKTOP

It's true: the beloved Start button is gone. Or is it? The Start button is still there in the lower-left corner; it just doesn't take up any screen space until you move the mouse there. When you do so, you'll now see a thumbnail view of the Metro Start page, a good visual indicator of where you're going when you click. My only problem with this is that it behaves differently from most Web apps that use a similar interface technique: instead of letting you click anywhere on the thumbnail, you'll only be taken to the Start screen if you click with the mouse cursor all the way in the lower-left corner. Despite this detail, the thumbnail on-hover Start button is another example of Microsoft's having made the transition between Desktop and Metro views smoother in the Consumer Preview.

The Desktop workspace is for what Microsoft folks call "power users," even though it's what every Windows user has been using for the past 20 years. Windows Explorer's new file-management tools, complete with ribbon, have been tweaked since the Developer Preview. Now you can hide the ribbon (just as in Office 2010), and there are a bunch of new file-moving and copying tools.

THE CLOUD CONNECTION

SkyDrive is Microsoft's online storage service that offers anyone a free 25GB. The new OS makes the SkyDrive cloud storage and syncing service available to any Windows 8 app that wants to use it and that you allow to use it. On my test tablet, the SkyDrive app itself got a small Start screen tile, and the app's own interface used pages of tiles. This makes sense for a touch interface, but I'd like to be able to switch to a more concise list view; there wasn't even a semantic zoom view. But Windows 8's cloud capabilities go way beyond this simple SkyDrive Metro app, and, indeed, you can always hop onto the more powerful Web interface of SkyDrive. The system integrates messaging and sharing throughout, using whatever communication services you've enabled. As with Chrome OS, when you sign into any Windows 8 PC, you'll see all your same personalization, settings, and even Metro apps.

DEVICE MANAGEMENT

The Devices charm is accessible by swiping in from the right on a touch screen or moving the mouse cursor to the upper-right corner.
From here I only saw the multi-monitor setup choice, but heading to the Devices section of PC Settings let me check for new hardware and connect Bluetooth mice, speakers, keyboards, and the like. It also lets you prevent device software from being downloaded when you're using a metered mobile connection. When I plugged a USB memory stick into the Windows 8 tablet, a notification asked me to decide how to handle it, but my only option was to view files in the desktop mode; there was no Metro UI option for dealing with USB memory. I would like to have seen a new tile giving access to the USB memory, at any rate.

INTERNET EXPLORER 10

IE10 becomes a more integral part of the system with Windows 8, and it offers two guises: the full-screen Metro view and the more familiar desktop version. The former follows all the Metro app behaviors. Instead of tabs, you drag down from the top of the screen (or up from the bottom) to reveal your open browser pages in thumbnails along the top. Upon this same gesture, along the bottom appear the standard browser address bar and icons for page reloading and pinning (which adds the page to your Start screen). You can also unpinch to zoom, and swiping a finger left or right moves you forward or backward in your browsing history. A double tap will also zoom in on the page.

Like the iPad's Safari browser, the Metro version of Internet Explorer 10 doesn't support Flash (or other plug-ins, for that matter), but should you encounter a page that uses those technologies, you can simply switch to the desktop version of IE. The modern replacement for Flash is HTML5, and there's good news on that front with the IE10 that comes with Windows 8 Consumer Preview. On the HTML5Test.com site, which measures the number of HTML5 features, it gets a score of 314. This is up from 301 in the Developer Preview, and a mere 141 for IE9. Chrome, Firefox, and others have recently scored above 300, so it's nice to see that IE is finally in the mix.

A wrench icon lets you search within the page or switch to the desktop browser mode, which is indistinguishable from IE9. All these options also appear if you right-click your mouse button. A final helpful touch is the "Clean up tabs" option, which closes all except your current page. In a very quick and dirty performance test, IE10 posted a 427ms SunSpider result on the 1.6GHz Core i5 tablet with 4GB RAM. This compares with 259ms on a Core 2 Duo 2.53GHz Windows 7 (32-bit) laptop with 3GB of DDR2 memory, and 686ms with Google Chrome on the same Samsung tablet.

SOME STABILITY ISSUES (AS EXPECTED)

The build of Windows 8 I tested wasn't as final as what's available today for download, and I did run into minor glitches. At one point, the PC Settings page stopped responding, instead drawing blue boxes around my choices. At one point I even got a shutdown message with a frowny-face emoticon. The Developer Preview I tested months ago didn't have any similar issues. When I tried to shut down and restart, the accessibility voice started announcing whatever I touched, without performing the action I wanted. And when I was configuring a wireless mouse, the screen switched to portrait orientation, though I was viewing landscape. And the screen would occasionally brighten to full intensity unprovoked. But this is why Microsoft released a preview: to get this kind of feedback and fix it before it goes on sale.

A PARADIGM FOR THE FUTURE?

I was initially dubious about Windows 8's split personality, but it is making more and more sense to me.
For on-the-go Web browsing, Facebooking, emailing, and casual gaming, you've got the touch tablet interface. But you can then plug the same tablet into a dock, turning it into a full-blown desktop PC, with keyboard, mouse, and even a larger external monitor. And you also have all those Windows apps you've been using for years. I'm sure I'm not alone in that my primary work PC is a docked laptop with a large external monitor. The Windows 8 scenario just takes this a step further in portability.

This is far from the end of the story for Windows 8. Now that the Consumer Preview installer software is available, we'll be testing on more machines and running benchmarks and other comparative performance tests. We'll also take deeper dives into the included apps, the Windows Store, and the best third-party apps. And we'll be keeping you informed about Windows 8 till its expected launch later this year.

Microsoft is diving into the deep end with this one-size-fits-all tablet and desktop OS, and only time will tell whether it's a strategy that resonates as well as the more bottom-up iPad system from Apple. And the contrast with Mac OS X Mountain Lion's approach is equally stark, with Apple keeping its desktop and mobile OSes completely separate, while increasing synergy and feature overlap between the two. Windows 8 introduces some really innovative touch-input options suited to thumb interactions, and it will benefit the desktop user as well, with faster startup and better file management. So don't count Microsoft out: Windows 8 is evidence that the old tech company is quite capable of bold moves and impressive innovation.

PC MAGAZINE DIGITAL EDITION | APRIL 2012

April 14, 2012

Bound for the moon




The next rover to roam the moon’s surface may come not from NASA and its rocket scientists but from college students and private companies working on a shoestring.

By Michael Belfiore


On a muddy, rubble-strewn field on the banks of the Monongahela River in Pittsburgh, a five-foot-tall pyramidal robot with twin camera eyes slowly rotates on four metal wheels, its electric motors emitting a low whine. In a nearby trailer, students from Carnegie Mellon University huddle around a laptop to watch the world through the robot’s eyes. In the low-resolution grayscale images on the laptop’s screen, the rutted landscape looks a lot like the moon, which is the robot’s ultimate destination.

Carnegie Mellon robotics professor William “Red” Whittaker and his students built Red Rover to win the Google Lunar X PRIZE, a competition designed to boost the role of private companies in space and inspire innovation in spaceflight technology. The winning prize is $20 million, which will go to the first nongovernment team that lands a robot on the moon, gets the robot to travel half a mile or so, and sends high-definition video back to Earth—all by the end of 2015. A second-place prize of $5 million, along with bonuses for other achievements such as reaching the site of an Apollo landing, brings the total purse to $30 million. Although 26 teams are competing, Whittaker’s team is a clear leader. His firm, Astrobotic Technology, was the first team to make a down payment on a rocket that will carry its spacecraft and rover to the moon. Whittaker has also proved himself to be a champion builder of autonomous vehicles that can navigate extreme environments.

The Google Lunar X PRIZE comes at a major turning point for the U.S. space program. In 2010, following the recommendations of the Review of U.S. Human Space Flight Plans Committee, President Barack Obama directed NASA to encourage privately owned and operated spaceships to replace the retiring space shuttle. With input and seed money from NASA, the reasoning goes, private companies can design and construct ships more quickly and more affordably than the usual big contractors can produce vehicles for the government agency. In the same spirit, the Google Lunar X PRIZE seeks to foster a new class of private planetary missions, one that does not depend on expensive one-off spacecraft and political commitments that may not last beyond one administration. Instead researchers would pay private companies to launch their rovers and instruments. NASA has added its own incentives—an additional $30.1 million, split among six teams for surmounting technical feats that have stumped many government rovers, such as surviving the lunar night. The fate of private spaceflight companies after the Google Lunar X PRIZE is far from certain, and not everyone is convinced that a market exists for their services, but many researchers are excited about the prospect of commercially funded space science.

TEST LAUNCH

The contest has a precedent in the $10-million Ansari X PRIZE, which ended in 2004, when SpaceShipOne became the first privately manufactured manned vehicle to leave the atmosphere. SpaceShipOne was a rocket plane built by Mojave, Calif.–based Scaled Composites, with funding from Microsoft billionaire Paul Allen. Virgin Galactic is now financing SpaceShipTwo. It has received more than $60 million in deposits from individuals who are willing to pay $200,000 each for the chance to float in microgravity and see Earth from a distance. NASA has contracted Virgin and six other private companies to fly scientific equipment onboard SpaceShipTwo and other spacecraft to conduct experiments on challenges such as transferring fuel without gravity.
Now the organizers of the Google Lunar X PRIZE hope to duplicate this success for robotic planetary missions.

Few people are as qualified to get a robot on the moon as Red Whittaker. The 63-year-old may have done more than any other individual in developing the discipline of field robotics—taking robots out of controlled environments such as automobile factories and releasing them to do useful work in the wild. In the 1980s he designed and built the robots that explored damaged and dangerously radioactive areas of the partially melted-down Three Mile Island nuclear power plant. As founder and head of the Field Robotics Center at Carnegie Mellon, Whittaker has since made a career of breaking new ground in autonomous vehicles. He has created robots that hunt meteorites in the ice fields of Antarctica and robots that climb into the craters of active volcanoes in Alaska and Antarctica.

Whittaker began planning for the Google Lunar X PRIZE in 2007 while in the midst of a different competition: the Defense Advanced Research Projects Agency’s Urban Challenge, held at the former George Air Force Base in Victorville, Calif. Under the team name “Tartan Racing,” Whittaker and his students partnered with General Motors, Continental and other sponsors to create a driverless Chevy Tahoe named “Boss.” Even as he won a first-place victory in the world’s first autonomous vehicle race through city streets, Whittaker wasted no time in finalizing plans for a class at Carnegie Mellon called Advanced Mobile Robot Development. The class’s modest objectives, as described in the course catalogue, are to “detail, analyze and simulate a robotic lunar lander, field-test a lunar rover prototype, tackle enterprise challenges, and communicate mission progress through writing, photography and video.” The course is open to Carnegie Mellon students of any field at any level.

Around the same time, Whittaker established Astrobotic Technology as a for-profit company with long-time space entrepreneur David Gump at the helm. Gump aggressively pursues corporate sponsorships and potential customers, whereas Whittaker contributes deep knowledge accumulated over more than 29 years of research at the Field Robotics Center. Among Astrobotic’s sponsors is Pittsburgh-based Alcoa, which has donated the aluminum required for the spacecraft that will carry the rover to the moon. Whittaker, an ex-marine and the son of a chemist and an explosives salesman, says that landing one of his team’s creations on the moon would represent the fulfillment of a career path that has seen his robots on land, water, underwater, underground, and in just about every environmental extreme here on Earth. Winning the moon doesn’t just mean the first prize; in his mind, Astrobotic won’t be successful until it meets every one of the bonus objectives as well. “If you haven’t done everything,” he says, “you haven’t done anything.”

ROCKET SCIENCE

Whittaker’s vision for getting Astrobotic’s spacecraft and rover on the moon begins with the SpaceX Falcon 9 rocket. Established with the goal of dramatically reducing the cost of space access, SpaceX may be the key enabler of the Google Lunar X PRIZE competition. Whittaker believes that the SpaceX rocket will be the vehicle of choice for all the teams in the competition. “As far as I’m aware, every U.S. contender is targeting SpaceX,” he says. Even so, the cost of launch will be the single greatest expense for any team.
Though less expensive than other rockets in its class, the published price of a Falcon 9 launch is still $54 million—more than twice the top prize. SpaceX’s competitors are reluctant to discuss their own launch arrangements, but it is clear that SpaceX has already upended the market with the single biggest commercial launch contract in history—a $492-million deal with Iridium, a satellite communications company.

After Red Rover leaves Earth’s atmosphere atop its Falcon 9, the Astrobotic spacecraft-and-rover stack will jettison its protective nose fairing, and the rocket’s second-stage engine will push the spacecraft and rover on a course to the moon. The transit will take five days. Guidance, navigation and control software developed at Carnegie Mellon will keep the rocket on the right path. The software is a direct descendant of the code that enabled Tartan Racing to win the Urban Challenge. The computational challenges of autonomous driving and spacecraft piloting are not so different—the same kind of math solves both problems, which is why the software is so similar. The main difference, says Astrobotic team member and Ph.D. candidate Kevin Peterson, is the lack of GPS to guide the vehicle. Instead the craft will plot its trajectory to the moon by referencing stars, the moon and Earth.

Once in orbit, the spacecraft and rover must descend to the moon’s surface. In 1969 astronaut Neil Armstrong piloted the lunar module from orbit to a specific location on the moon, while avoiding local hazards such as boulders and craters. But the 250,000-mile distance between our planet and its satellite imposes a time lag that precludes real-time control by a pilot on Earth, so the spacecraft’s software will have to accomplish autonomously what Armstrong did by hand. A primary descent engine will burn to slow the spacecraft down as it approaches the moon, while small thrusters will keep the vehicle stabilized. Touching down two days after lunar dawn, the lander will deploy two ramps (the second is a spare, in case a rock or crater obstructs the first). The bolts that hold the ramps folded against the lander are rigged to break apart under intense heat. After the ramps fall from the spacecraft to the ground, the rover will roll down one of them to the moon’s surface, binocular eyes scanning the ground ahead.

Moon dust is too slippery to permit an accurate reading of distance traveled based on how many times the rover’s wheels have turned. Instead the rover’s onboard computer will calculate distance by comparing the changing appearance of surface features as the robot moves (a sketch of the idea follows below). Radiation-hardened components will protect the computer from the unfiltered solar and cosmic radiation with which the airless moon is bombarded.

Back in Pittsburgh, Astrobotic team members at mission control will work 24-hour shifts through the long lunar day, using a steady stream of low-resolution images to guide Red Rover to interesting features (including, it is hoped, an Apollo landing site). The rover will avoid hazards on the moon’s surface autonomously. It will beam high-definition video as blocks of encrypted data, at least one immediately after landing and one later in the mission to meet X PRIZE requirements. The rover will also send e-mail, tweets and Facebook posts.
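The distance-from-imagery idea mentioned above is known as visual odometry. Below is a minimal, illustrative Python/OpenCV sketch of the concept; the downward-looking camera, its assumed height and focal length, and the flat-ground scaling are simplifications for illustration, not details of Astrobotic's actual binocular system, which could recover scale from stereo instead.

    import cv2
    import numpy as np

    CAMERA_HEIGHT_M = 1.2    # assumed camera height above the ground
    FOCAL_LENGTH_PX = 700.0  # assumed focal length, in pixels

    def ground_shift_m(prev_gray, curr_gray):
        """Estimate how far the ground moved between two frames, in meters."""
        # Dense optical flow: how the surface texture shifted between frames.
        flow = cv2.calcOpticalFlowFarneback(
            prev_gray, curr_gray, None,
            pyr_scale=0.5, levels=3, winsize=21,
            iterations=3, poly_n=5, poly_sigma=1.1, flags=0)
        # The median flow vector is robust to outliers such as moving shadows.
        dx_px, dy_px = np.median(flow.reshape(-1, 2), axis=0)
        # Pinhole camera over a flat ground plane: meters per pixel.
        return float(np.hypot(dx_px, dy_px)) * CAMERA_HEIGHT_M / FOCAL_LENGTH_PX

    # Summing per-frame shifts gives distance traveled without trusting
    # wheel-rotation counts, which slip in loose moon dust.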
A major technical challenge for the team is making sure Red Rover survives the extremes of lunar day and night, each of which lasts two Earth weeks. During the two-week lunar night, the temperature at the moon’s surface where the team plans to land plummets from a daytime high above 248 degrees Fahrenheit to around −274 degrees F. Any components that contained traces of water, such as the batteries, would suffer irreparable damage as the water froze and expanded. The only rovers ever to have survived the extremes of day and night were the Soviet remote-controlled lunar rovers, called Lunokhods, in the 1970s. They relied on a radioactive polonium isotope to stay warm. But Astrobotic and other private companies competing for the X PRIZE do not have access to these tightly controlled materials. To protect Red Rover from the heat of the sun, carbon-fiber structures surrounding the battery cells conduct heat to the outer surface of the rover. At night, Red Rover will hibernate, and it will awaken with the sun to fire up nonaqueous lithium iron phosphate batteries rigorously tested by then Carnegie Mellon mechanical engineering undergraduate Charles Muñoz. That is the kind of innovation on the cheap that the X PRIZE is meant to inspire.

Although Astrobotic stands a good chance of winning the Google Lunar X PRIZE race, it faces steep competition from India and Russia, which are jointly sponsoring a lunar rover, and from China, which is building a rover of its own that will use a radioisotope to stay powered up through the lunar night. If one of these gets to the moon first, the top prize drops to $15 million.

COMPETITION

Whittaker’s team is also expecting strong competition from other X PRIZE participants. Mountain View, Calif.–based Moon Express, with backing from billionaire co-founder Naveen Jain and other wealthy individual investors, may be the best funded of the Google Lunar X PRIZE teams. It entered the fray only in 2010, three years after the contest was announced, so it is lagging behind Astrobotic. But it is overcoming its latecomer disadvantage with a preexisting spacecraft platform developed by NASA. Another contestant is Boulder, Colo.–based Next Giant Leap, headed by former U.S. Air Force pilot-turned-entrepreneur Michael Joyce. Joyce’s company has teamed up with Draper Laboratory (which designed the guidance, navigation and control systems that shepherded the Apollo spacecraft to the moon), a group at the Massachusetts Institute of Technology, and the space systems branch of Sierra Nevada Corporation. It is building a novel “hopping” spacecraft that obviates the need for a separate rover. The craft reignites the thrusters it uses for touchdown to lift off again and travel short distances to areas of interest. The idea seems workable but only if Joyce can raise the necessary funds.

The Google Lunar X PRIZE organizers hope that if they build it, the market will come—that developing rovers and getting them on the moon will spur the growth of a new market. Astrobotic, for example, is offering room onboard its spacecraft and rover at the rate of $1.8 million and $2 million per kilogram (2.2 pounds), respectively, plus a $250,000 “integration fee.” For researchers such as University of Maryland physicist Douglas Currie, at least, a guaranteed spot for a fixed price on a commercial mission would be a boon. Currie and his colleagues want to place an array of laser-ranging retroreflectors on the moon to support measurements that would be 100 times more accurate than can be made with those left by the Apollo astronauts—if only missions become available on which to fly them.
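At those published rates, pricing a payload is simple arithmetic. The sketch below uses the per-kilogram rates and integration fee quoted above; the 4 kg instrument mass is a hypothetical example, not a figure from the article:

    # Rough price of flying a small instrument with Astrobotic,
    # at the rates quoted in the article.
    SPACECRAFT_RATE = 1.8e6    # dollars per kilogram, carried on the spacecraft
    ROVER_RATE = 2.0e6         # dollars per kilogram, carried on the rover
    INTEGRATION_FEE = 250_000  # flat per-payload fee

    mass_kg = 4.0              # hypothetical instrument mass

    on_spacecraft = mass_kg * SPACECRAFT_RATE + INTEGRATION_FEE
    on_rover = mass_kg * ROVER_RATE + INTEGRATION_FEE

    print(f"On the spacecraft: ${on_spacecraft / 1e6:.2f} million")  # $7.45 million
    print(f"On the rover:      ${on_rover / 1e6:.2f} million")       # $8.25 million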
Perhaps the most enduring benefit of the X PRIZE will be to inspire the next generation of scientists and engineers. The race has lent an air of real-world excitement to Whittaker’s Advanced Mobile Robot Development course. During the final week of classes in April 2011, members of the Astrobotic structures team scurry about the 3,000-square-foot workshop of Carnegie Mellon’s Planetary Robotics Laboratory, which is entirely dedicated to the moon rover project. They are testing the design for fragmenting metal bolts, an alternative to typical explosive bolts, that unhinge the ramps from the spacecraft so that the rover can explore the lunar surface. Grad student Kanchi Nayaka and a group of undergrads prepare a high-speed video camera on a tripod to record the simulation. The students then throw a switch, and 17.9 seconds later the bolt breaks apart with a bang, and the ramp swings open and falls to the ground, ready for the rover to emerge. “Awesome!” Nayaka says. She steps back from the camera and shoots a grin at a visitor. “You must be good luck!”

Scientific American, April 2012

THE LIMITS OF BREATH HOLDING





It’s logical to think that the brain’s need for oxygen is what limits how long people can hold their breath. Logical, but not the whole story.

By Michael J. Parkes

TAKE A DEEP BREATH and hold it. You are now engaging in a surprisingly mysterious activity. On average, we humans breathe automatically about 12 times per minute, and this respiratory cycle, along with the beating of our heart, is one of our two vital biological rhythms. The brain adjusts the cadence of breathing to our body’s needs without our conscious effort. Nevertheless, all of us also have the voluntary ability to deliberately hold our breath for short periods. This skill is advantageous when preventing water or dust from entering our lungs, when stabilizing our chests before muscular exertion and when extending how long we can speak without pause. We hold our breath so naturally and casually that it may come as a surprise to learn that fundamental understanding of this ability still eludes science. (Feel free to exhale now, if you haven’t already.)

Consider one seemingly straightforward question: What determines how long we can hold our breath? Investigating the problem turns out to be quite difficult. Although all mammals can do it, nobody has found a way to persuade laboratory animals to hold their breath voluntarily for more than a few seconds. Consequently, voluntary breath holding can be studied only in humans. If the brain runs out of oxygen during a lengthy session, then unconsciousness, brain damage and death could quickly follow—dangers that would render many potentially informative experiments unethical. Indeed, some landmark studies from past decades are unrepeatable today because they would violate the safety guidelines for human subjects. Nevertheless, researchers have found ways to begin answering the questions surrounding breath holding. Beyond illuminating human physiology, their discoveries might eventually help save lives both in medicine and in law enforcement.

DETERMINING THE BREAK POINT

In 1959 physiologist Hermann Rahn of the University at Buffalo School of Medicine used a combination of unusual methods—slowing his metabolism, hyperventilating, filling his lungs with pure oxygen, and more—to hold his breath for almost 14 minutes. Similarly, Edward Schneider, a pioneer of breath-holding research at the Army Technical School of Aviation Medicine at Mitchel Field, N.Y., and, later, Wesleyan University, described a subject lasting for 15 minutes and 13 seconds under comparable conditions in the 1930s. Still, studies and daily experience suggest that most of us, after inflating our lungs maximally with room air, cannot hold that breath for more than about one minute. Why not longer? The lungs alone should contain enough oxygen to sustain us for about four minutes, yet few people can hold their breath for even close to that long without practice. In the same vein, carbon dioxide (the exhaled waste product made by cells as they consume food and oxygen) does not accumulate to toxic levels in the blood quickly enough to explain the one-minute limit.

When immersed in water, people can hold their breath even longer. This extension may stem in part from increased motivation to avoid flooding the lungs with water (it is unclear whether humans possess the classical diving reflex of aquatic mammals and birds that lowers their metabolic rate during breath holding while submerged). But the principle remains true: breath-holding divers feel compelled to draw a breath well before they actually run out of oxygen.
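That "about four minutes" figure can be reproduced with a rough oxygen budget. The values below are textbook order-of-magnitude assumptions (total lung capacity, the oxygen fraction of room air, resting oxygen consumption), not numbers from the article:

    # Rough oxygen budget for a single maximal breath (all values assumed).
    lung_volume_l = 6.0      # total lung capacity after a maximal inhalation
    o2_fraction = 0.21       # oxygen fraction of room air
    o2_use_l_per_min = 0.3   # resting oxygen consumption, liters per minute

    o2_in_lungs_l = lung_volume_l * o2_fraction           # about 1.26 L of O2
    print(f"{o2_in_lungs_l / o2_use_l_per_min:.1f} min")  # about 4.2 minutes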

As Schneider observed, “it is practically impossible for a man at sea level to voluntarily hold his breath until he becomes unconscious.” Unconsciousness might occasionally occur under unusual circumstances, such as in extreme diving competitions, and some anecdotes suggest rare cases in which children can hold their breath long enough to pass out, but laboratory studies confirm that normally we adult humans cannot do it. Long before too little oxygen or too much carbon dioxide can hurt the brain, something apparently brings us to the break point (as researchers call it) past which we cannot resist gasping for air.

One logical, hypothetical explanation for the break point is that specialized sensors in the body observe physiological changes associated with breath holding and trigger a breath before the brain shuts down. Obvious candidates for such sensors would be ones that watched for lengthy expansions of the lungs and chest or that detected reduced levels of oxygen or elevated levels of carbon dioxide in the blood or the brain. Neither of those ideas appears to hold up, however.

The involvement of volume sensors in the lungs appears to have been ruled out by various experiments conducted between the 1960s and the 1990s by Helen R. Harty and John H. Eisele, working independently in Abe Guz’s laboratory at Charing Cross Hospital in London, and by Patrick A. Flume, then at the University of North Carolina at Chapel Hill. Their experiments showed that neither lung-transplant patients, whose nerve connections between lungs and brain were severed, nor patients receiving complete spinal anesthesia, whose chest-muscle sensory receptors were blocked, could hold their breath for abnormally long periods. (It is significant that those anesthesia experiments did not affect the diaphragm muscle, however, for reasons that will become apparent.)

Research also seems to exclude the involvement of all the known chemical sensors (chemoreceptors) for oxygen and carbon dioxide. In humans, the only known sensors detecting low blood oxygen levels are in the carotid arteries just underneath the angle of the jaw, which supply blood to the brain. The chemoreceptors detecting raised carbon dioxide levels are in the carotid arteries and in the brain stem, which controls regular breathing and the other autonomic (involuntary) functions. If the oxygen chemoreceptors caused the urgent sensation of break point, then without their feedback, people ought to be able to hold their breath until rendered unconscious. Experiments in Karlman Wasserman’s laboratory at the University of California, Los Angeles, have shown, however, that patients still cannot do so if the nerve connections between chemoreceptors in their carotid arteries and the brain stem are severed.

Moreover, if reduced oxygen or elevated carbon dioxide levels alone dictated the break point, then beyond some threshold levels, breath holding should be impossible. Yet numerous studies have shown this not to be the case. It would also be true that after the gas levels triggered a break point, breath holding would remain impossible until the arterial oxygen and carbon dioxide levels returned to normal. But that prediction is not borne out, either, as researchers have casually observed since the early 1900s.
In 1954 Ward S. Fowler of the Mayo Clinic described formally how after maximum breath holding, subjects could immediately do it a second time if they inhaled only an asphyxiating gas—and even a third time, despite their blood gas levels becoming progressively worse. Further work has verified that this remarkable repeated breath-holding capability is independent of the number or volume of inhalations of the asphyxiating gas. Indeed, in 1974 John R. Rigg and Moran Campbell, both at McMaster University in Ontario, demonstrated that it persists even when the subjects merely attempt to exhale and inhale with their airway closed. Taken together, all these experiments involving repeated breath-holding maneuvers suggest that the need to draw a breath somehow relates to the muscular act itself and not directly to its gas-exchange functions.

When the chest is greatly inflated, its natural tendency is to recoil unless the inspiratory muscles of breathing hold it in the inflated state. So researchers of the break point began to look for answers in the body’s neurological and mechanical controls over these inspiratory breathing muscles. As part of that work, they also wanted to learn whether breath holding involves a voluntary halt of the automatic breathing rhythm that drives these muscles or the prevention of the breathing muscles from expressing this automatic rhythm.

UNREPEATABLE EXPERIMENTS

The normal rhythm of our breathing can be said to begin when the brain stem sends impulses down our two phrenic nerves to the bowl-shaped diaphragm muscle underneath the lungs, telling it to contract and inflate the lungs. When the impulses stop, the diaphragm relaxes and the lungs deflate. In other words, some rhythmic pattern of neural activity—a central respiratory rhythm—mirrors the cycle of our breaths. In humans it is still technically and ethically impossible to measure this central rhythm directly from the phrenic nerves or from the brain stem. Investigators have devised ways to record the central respiratory rhythm indirectly, however: by monitoring instead the electrical activity in the diaphragm muscle, the pressure in the airway or other changes in the autonomic nervous system, such as the heartbeat rhythm (known as respiratory sinus arrhythmia).

Working from such indirect measurements, Emilio Agostoni of the University of Milan in Italy showed in 1963 that he could detect a central respiratory rhythm in human subjects holding their breath well before they reached break point. In related experiments at the University of Birmingham in England in 2003 and 2004, graduate student Hannah E. Cooper, anesthetist Thomas H. Clutton-Brock and I used respiratory sinus arrhythmia to show that the central respiratory rhythm never stops: it persists throughout breath holding. Breath holding must therefore involve suppressing the diaphragm’s expression of this rhythm, possibly through a voluntary, continuous contraction of that muscle. (Various experiments seem to have ruled out the involvement of other muscles and structures involved in normal breathing.)

Break point may similarly depend on sensory feedback to the brain from the diaphragm—reflecting, for example, how stretched or unusually overworked it may be. If so, then paralyzing the diaphragm to eliminate its sensory feedback to the brain ought to allow subjects to prolong their breath holding greatly if not indefinitely. Such was the expectation in one of the most alarming breath-holding experiments ever, which Campbell performed at Hammersmith Hospital in London in the late 1960s. Two healthy, conscious volunteers consented to have all their skeletal muscles temporarily paralyzed with intravenous curare—except for one forearm, with which they could signal their wishes.
The subjects were kept alive with a mechanical ventilator; breath holding was simulated by switching it off, and the subjects indicated their break point by signaling when they wanted the ventilator restarted. The result was astonishing. Both volunteers were happy to leave the ventilator switched off for at least four minutes, at which point the supervising anesthetist intervened because their blood carbon dioxide levels had risen perilously. After the effects of the curare had worn off, both subjects reported feeling no distressing symptoms of suffocation or discomfort. For obvious reasons, such a daring experiment has rarely been repeated. Others have since tried to replicate Campbell’s findings and failed: their courageous volunteers reached break point after such a short duration that their carbon dioxide levels barely rose above normal. Those observations suggest that the subjects might have chosen to end the tests early, possibly because of discomfort from the air tubes holding open the glottis (a modern safety requirement not present in Campbell’s experiment) and because of their greater awareness of the life-threatening risk. Nevertheless, some equally remarkable experiments by Mark I. M. Noble, working in Guz’s laboratory at Charing Cross Hospital in the 1970s, seem to confirm that diaphragm paralysis prolongs breath-holding duration. Instead of total body paralysis, Noble and his colleagues used the much less life-threatening maneuver of paralyzing the diaphragm alone by anesthetizing only the two phrenic nerves. Doing so doubled subjects’ average breath-holding duration and reduced the usual uncomfortable sensations that accompany breath holding.

CURRENT BEST EXPLANATION
The balance of evidence thus favors the speculation that a voluntary, lengthy contraction of the diaphragm holds the breath by keeping the chest inflated. The break point may depend very much on stimuli that reach the brain from the diaphragm in this unusual contracted state. During such a lengthy contraction, the brain might subconsciously perceive the unusual signals from the diaphragm as vaguely uncomfortable at first but eventually as intolerable, causing the break point. The automatic rhythm then regains control. This hypothesis is not fully fleshed out, but it fits nicely both with Fowler’s observations (that any release of breath holding, necessarily by relaxing the diaphragm, enabled another one) and with the effects of lung inflation and blood-gas manipulation on breath-holding duration. Relaxing the diaphragm even a bit and exhaling slightly would delay break point by relieving the signals from the stretch sensors in the diaphragm. Raising the oxygen level and lowering the carbon dioxide level in the blood would also extend breath-holding capability by reducing biochemical indicators of fatigue in the diaphragm. Anything that prevents the brain from monitoring such information—for example, blocking the nerves between the diaphragm and the brain—will extend duration. The brain’s tolerance of such unpleasant signals will also depend on mood, motivation and the ability to be distracted by, say, mental arithmetic. This hypothesis is only the simplest unifying explanation for the experimental observations. Some of these experiments used too few subjects to be the basis for reliable generalizations, and ethical permission to repeat them may never be granted. Key pieces of the jigsaw puzzle may still be missing.
Moreover, a puzzle piece that does not yet quite fit comes from another of Noble and Guz’s dramatic (and now ethically unrepeatable) breath-holding experiments. They tripled the duration of breath holding in three healthy subjects by anesthetizing two sets of their cranial nerves (the vagus nerves, which go from the brain to organs in the chest and abdomen, and the glossopharyngeal nerves, which go to the glottis, larynx and other parts of the throat). This result would appear to have been achieved without affecting the diaphragm, except that it is also possible that the vagus nerves, too, carry some signals from the diaphragm. It seems less likely that the larynx itself contains a muscle involved in breath holding: in 1993, when surgeon Martyn Mendelsohn of Sydney, Australia, viewed the glottis via a camera inserted through a nostril, it often remained open throughout breath holding. This observation, too, seems to support the conjecture that the diaphragm’s role is key.

SAVING LIVES
A better understanding of what limits people’s ability to hold their breath has practical uses in medicine. As part of the treatment for breast cancer, for instance, patients receive radiation therapy, during which the goal is to deliver a lethal dose to the entire tumor without damaging the healthy tissues all around it. Doing so requires minutes of radiation exposure, during which a patient must try to keep her breast motionless. Because breath holding for so long is impractical, current practice uses short bursts of radiation timed to fall between a patient’s breaths, when her chest is moving least. Yet with each breath, the breast moves and may not necessarily return to exactly the same position. Medical physicist Stuart Green, clinical oncologist Andrea Stevens, anesthetist Clutton-Brock and I are now starting experiments funded by University Hospital Birmingham Charities to test whether it would be feasible to prolong breath holding sufficiently to aid radiotherapy treatment. A practical understanding of breath holding might also be of value to law-enforcement personnel when they are forcibly restraining suspects. Every year around the world, some people die accidentally while under restraint. Raising the metabolic rate, compressing the chest, lowering the blood oxygen level and raising the blood carbon dioxide level all shorten the duration of a person’s breath holding. So someone who is angry, has been fighting or is being forcibly held down may well need to draw a breath earlier than someone who is relaxed. In 2000 Andrew R. Cummin and his team at Charing Cross Hospital studied what happened when eight healthy subjects cycled moderately for one minute, then breathed out maximally and held their breath: the duration of their maximum breath holding plummeted to 15 seconds, the average amount of oxygen in their blood fell dramatically and two of them developed irregular heartbeats. Consequently, the researchers concluded that the “cessation of breathing for short periods during vigorous restraint . . . may account for unexplained deaths in these circumstances.” Law-enforcement authorities have carefully compiled guidelines for the use of forcible restraint; they should be observed scrupulously. Such investigations of breath holding open windows into vital aspects of human physiology. Clearly, more groundbreaking discoveries, particularly about the diaphragm itself, remain ahead—which leaves some of us breathless in anticipation.

Scientific American, April 2012

April 13, 2012

A Revolution in Product Design





CAD engineers to benefit from innovations in processors and software. BY PETER VARHOL
Engineers engaged in computer-aided design (CAD) can be excused for thinking that workstation performance hasn’t adequately kept up with their needs. Because CAD computations don’t lend themselves easily to parallel execution, the trend over the last decade toward multiple processors and multiple cores per processor doesn’t provide a significant boost to CAD applications. There is a strong connection between processor clock speed and the performance of CAD software. However, the design and manufacturing technologies that enabled rapid increases in clock speed during the 1990s began reaching their theoretical limits, and Intel has turned to alternative technologies to improve processor performance. But users of CAD software from the likes of Autodesk, Siemens PLM, SolidWorks, PTC, and Bentley still have a few secret weapons in the performance race. Intel has provided some innovative processor technologies that can speed up serial applications such as CAD, and a few software partners have taken advantage of these technologies to deliver real solutions to CAD engineers. The Autodesk Inventor product suite offers software for 3D mechanical design, product simulation, tooling creation, and design communication. Inventor, along with offerings from SolidWorks, PTC, and Siemens PLM, provides integrated tool suites intended to help engineers validate their ideas earlier in the design process.

Hardware Provides the Performance Foundation
While processor clock speed increases have given way to multiple cores, Intel has built a few tricks into its current high-performance processors, such as the Intel® Xeon® processor family. One example is Turbo Boost, which dynamically increases processor performance for periods of time in response to high demand. Turbo Boost activates when the operating system requests the highest performance state of the processor, delivering a substantially higher clock speed for a serial application than the rated speed of the processor. Another innovation is hyperthreading. A hyperthreaded core duplicates some parts of the pipeline—typically architectural state such as control registers and general-purpose registers—allowing the operating system to schedule two threads or processes simultaneously. The result is that the processor can hold multiple thread states at once, which makes the context switches that processors normally perform much faster. Intel has also focused on better hardware support for virtualization. Intel Virtualization Technology for Directed I/O enables users to create virtual partitions and concurrently run interactive and batch applications with assured levels of performance. It includes several important capabilities, such as I/O device assignment, DMA remapping, interrupt remapping, and reliability features that prevent memory or virtual machine (VM) corruption.

Software Delivers Virtualization Flexibility
Software providers Microsoft and Parallels have taken advantage of Intel hardware virtualization technologies to deliver better performance using virtual machines. Microsoft Windows HPC Server R2 provides engineering groups with access to affordable and powerful supercomputing resources in the familiar Windows environment. Effectively, it enables clusters of workstations to act as a single HPC cluster, allowing all engineers to share computing resources.
Parallels PWE delivers a high-performance virtualization platform for workstations that gives end users dedicated HPC, graphics, and networking resources for both host and guest workstation environments. You may not be able to take good advantage of multiple processor cores to accelerate parallel execution, but today’s workstations and software provide ways to improve engineering processes. Using virtualization, you can test multiple designs on separate VMs, each performing at close to the full speed of the CPU. With Microsoft’s HPC Server, you can do so at cluster speeds, without taking a backseat to analysis and simulation jobs. While processor innovations continue, significantly faster clock speeds are unlikely in the foreseeable future. However, companies can rethink the engineering process and leverage the software advancements being made by CAD vendors. These vendors are exploring the value of simulation-based design and how such solutions let companies employ all the available technology to increase innovation. The question for the rest of us, as the year begins, is whether we are going to change the way we work in order to make the best use of these innovations.
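As an aside, the gap between logical processors and physical cores that hyperthreading creates is easy to inspect on a workstation. The following is a minimal sketch, not part of Varhol’s article, written in Python and assuming the third-party psutil package is installed:

    import psutil

    # Hardware threads the OS can schedule versus actual physical cores.
    logical = psutil.cpu_count(logical=True)
    physical = psutil.cpu_count(logical=False)

    print(f"Logical processors: {logical}")
    print(f"Physical cores:     {physical}")

    # On a hyperthreaded Xeon, logical is typically twice physical; a CAD
    # application's serial hot path still runs on one hardware thread either way.
    if logical and physical and logical > physical:
        print(f"Hyperthreading appears enabled: {logical // physical} threads per core.")
    else:
        print("Hyperthreading appears disabled or unsupported.")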

April 12, 2012

Getting to the Heart of Mechanotransduction



Mechanotransduction, the process of converting mechanical stimuli into cellular responses, enables cells to produce signals that regulate a wide range of physiological responses. In the beating heart, for example, the stretching of muscle cells causes the release of chemical signals that regulate heart function, and studies in mice and humans have suggested a connection between faulty stretch-sensing mechanisms and heart disease (1). The mechanisms underlying such processes, however, have been unclear. On page 1440 of this issue, Prosser et al. (2) provide some clarity, using a novel method that involves precisely stretching single heart muscle cells (cardiomyocytes) glued to microscopic glass rods. They demonstrate that a moderate stretch during the cell’s relaxed state (diastole) can trigger a burst of calcium “sparks.” They also show that this process is defective in a life-threatening muscle disease. Mechanotransduction has an important role in the myocardium, the heart’s muscular tissue. Each contraction phase (systole) of the cardiac cycle causes sarcomeres (the basic unit of muscle) to shorten; the sarcomeres then lengthen again during diastole. In the early 1900s, European researchers Otto Frank and Ernest Starling showed that an increased length change during diastole produces a stronger contraction in the following systole. To better understand the mechanisms underlying cardiac mechanotransduction, Prosser et al. developed new tools to apply a controlled and moderate stretch (8%) to isolated rat or mouse cardiomyocytes during diastole, and then measured intracellular levels of calcium ions (Ca2+) and reactive oxygen species (ROS) before, during, and after stretch. They observed that stretch initiated, within milliseconds, a burst of Ca2+ sparks—highly localized and temporary increases in intracellular Ca2+ concentration—and a nearly instantaneous increase in the rate of ROS production. Both Ca2+ spark generation and ROS production immediately returned to baseline levels after the cell was restored to its initial length. They demonstrated that the stretch-induced burst of Ca2+ sparks requires ROS; introducing an antioxidant molecule prevented the sparks, whereas a mild oxidant enhanced them. Prosser et al. also studied the behavior of single cardiomyocytes from mice that have a genetic mutation that causes a muscle disease similar to human Duchenne muscular dystrophy. Myocytes from these mdx mice display greater nicotinamide adenine dinucleotide phosphate (NADPH) oxidase activity—and higher cellular ROS levels—than myocytes from wild-type mice (3). Prosser et al. report that moderately stretching mdx cells produced Ca2+ waves instead of sparks; waves are typical responses of abnormal cardiac Ca2+ signaling. Prosser et al. propose that the process they call X-ROS signaling produces these results. Past studies had described increased Ca2+ spark generation in response to stretching of single cardiomyocytes (4–6), and implicated ROS in the enhanced Ca2+ sensitivity of mdx cardiomyocytes (3, 7), but the molecular mechanisms were unresolved. Now, using pharmacological and molecular techniques, Prosser et al. show that moderate diastolic stretch activates the enzyme complex NADPH oxidase 2 (NOX2), which they found colocalized with markers for the transverse tubule (T-T) system formed by invaginations of the muscle fiber’s plasma membrane. NOX2, in turn, directly mediates ROS-dependent Ca2+ spark generation (see the figure).
The finding that dystrophic heart muscle has an excessive X-ROS signaling response advances our knowledge of the mechanisms underlying abnormal Ca2+ signaling in this disease. Prosser et al. also propose that stretch-induced ROS production increases Ca2+ spark generation by sensitizing ryanodine receptor type 2 (RyR2) channels located in the nearby sarcoplasmic reticulum (SR), the intracellular membrane network that surrounds myofibrils. The SR releases and recaptures Ca2+ in each contraction-relaxation cycle that underlies the heartbeat. The cycle starts with Ca2+ entry into cardiomyocytes through voltage-activated channels located in the T-T system. Next, Ca2+ entry stimulates the opening of RyR2 Ca2+ release channels. This cellular response, known as Ca2+-induced Ca2+ release (CICR), causes muscle contraction. The cycle ends with relaxation, which occurs when intracellular Ca2+ returns to resting levels (8). The Ca2+ sensitivity of RyR2 channels is a key feature in CICR regulation. Alterations in RyR2 Ca2+ sensitivity, which is influenced by cellular factors and RyR2 redox state (9, 10), may underlie subcellular changes in Ca2+ signaling that contribute to disease (11). Although researchers reported more than a decade ago that the Ca2+ sensitivity of single RyR2 channels is redox-dependent (12), Prosser et al. demonstrate that a Ca2+ spark burst results from the very fast and reversible X-ROS signaling, which requires an intact microtubule network. Previous studies indicated that activation of cardiac NOX2 increases RyR2 S-glutathionylation, a reversible redox modification that enhances RyR2 activity and hence promotes SR Ca2+ release (13). Tachycardia (accelerated heart rate) and exercise augment these effects (14), suggesting a direct correlation between increased heart activity, NOX2 activation, increased RyR2 S-glutathionylation, and enhanced Ca2+ release. It remains unclear, however, whether the Ca2+ spark burst induced by controlled stretch entails RyR2 S-glutathionylation. In addition, the cellular mechanisms that so efficiently return ROS production and RyR2 activity to baseline levels after stretch remain unclear, as do the molecular mechanisms that enable extremely fast microtubule-dependent NOX2 activation. For example, do angiotensin receptors mediate this response, as they mediate the slow stretch response of the myocardium (15, 16)?

April 11, 2012

Healing Kansas




 
Better health requires improved education, more access to nutritious food and greater economic opportunities, new county rankings show.

As mayor of Kansas City, Kan., Joe Reardon is justifiably proud of the University of Kansas Medical Center, which has trained several generations of physicians and nurses for more than 100 years. After all, the medical center is consistently rated as the best hospital and treatment center in the state, according to a popular ranking of health institutions. So when Mayor Reardon—who heads the government of both the city and Wyandotte County, in which it sits—first learned that Wyandotte had come in dead last among the state’s counties in a rigorous analysis of health measurements in 2009, he was shocked. “We have great access to excellent health care in a state where some counties have essentially no access,” Mayor Reardon says. “And we’re ranked last out of 105 counties? My first reaction was, ‘How could this be?’” The answer, Mayor Reardon discovered as he delved into the statistics behind the claim, is that proximity to fine hospitals and first-rate doctors is only one of many factors—and not always the most important—determining how long people live and how vulnerable they are to serious illness. Evidence collected by public health experts over the past few decades repeatedly shows that less obvious forces, including proper diet and exercise, higher levels of education, good jobs, greater neighborhood safety, and underlying support from family and friends, provide a powerful, and often unappreciated, boost to a community’s health and well-being. By the same token, studies demonstrate, a poor showing in any of these areas can sink the health of individuals or of communities—even if they have access to top-flight medical facilities. The goal of the County Health Rankings project, which has given Wyandotte County low marks for health but high praise for its commitment to change, is to bring these hidden health factors to light and thereby help elected officials, civic leaders and community groups take concrete steps that can improve the health of local residents. The initiative originated at the University of Wisconsin–Madison, covering solely that state in 2003. A similar project began in Kansas in 2009, and in 2010 the Robert Wood Johnson Foundation in Princeton, N.J., provided funding so that the University of Wisconsin could expand its investigation to include within-state comparisons of counties in all 50 states. Among the biggest lapses identified in Wyandotte County, for example, were much higher than average rates of smoking and obesity, lower than average rates of high school graduation, a distressing number of babies who weigh too little at birth, and a relative scarcity of fresh fruits and vegetables in grocery stores compared with the rest of the state. Mayor Reardon says these measurements have already transformed his approach to budget priorities. Changes include earmarking money for the addition of mentoring programs for high school students, new parks and sidewalks, and the opening of more and better supermarkets and community gardens in impoverished neighborhoods. And that is just the start, Mayor Reardon says. “The measure of our success as a city is not just how many jobs we create but also the health of our citizens.” He believes that potential employers who want to stay competitive in today’s global marketplace are more likely to settle in communities where workers are both highly skilled and relatively healthy.
PUBLIC HEALTH STRATEGY HAS DEEP ROOTS
The notion that government officials can use public health statistics to improve policy decisions is not new. In 1854 physician John Snow, one of the founders of modern epidemiology, traced a cholera outbreak in the overcrowded London neighborhood of Soho to a contaminated public water pump by noting how many cases of illness clustered around the pump. (The pump was later found to be too close to a leaking cesspool.) Snow convinced officials to disable the pump, which helped to stop the spread of disease. Today’s health statisticians still search for instructive patterns of behavior and illness in communities, although they have moved beyond simply tracking infectious disease rates and deaths. Nowadays, says Julie Willems Van Dijk, a researcher at the University of Wisconsin Population Health Institute who helps county leaders figure out what to do with the data, public health officials also monitor quality of life and trends in chronic, noncommunicable disorders, such as depression, diabetes and heart disease. The trick for researchers, Willems Van Dijk says, is to sift information from broad studies of large populations to identify behaviors and other influences on health that can be modified. The next step is to see how those factors play out at the level of the city, county and town, where many of the policy decisions that most directly affect people’s health are made. Individual cities started enforcing smoking bans in restaurants, Willems Van Dijk notes, after studies showed that secondhand smoke increased the number of heart attacks and cases of asthma in nonsmokers. The County Health Rankings project, now updated annually, is an attempt to provide reliable health statistics on a scale and in a format that public officials can use to take action, such as altering zoning rules to allow for beneficial placement of grocery stores, bike paths and parks.

FOUR BROAD CATEGORIES
In comparing the counties within each state, Willems Van Dijk and her colleagues at the University of Wisconsin gather no new data. Instead they base their ratings on public information scoured nationwide from various sources, including the National Center for Health Statistics, the FBI and the U.S. Census. Their aim is to identify robust, reliable indicators that are measured the same way from county to county within each state for four broad categories that research shows shape health: behavior, clinical care, socioeconomic status and physical environment. Within these groupings, some of the most influential factors—such as smoking (behavior)—come as no surprise. Others include the education level attained by most of the population (socioeconomic status), the relative number of sexually transmitted diseases diagnosed each year (behavior), and the number of car crashes related to drunk driving (behavior). Researchers analyze a host of patterns in the data to help community leaders spot where improvements are most needed. For example, Wyandotte County scored particularly low on education in 2011. Part of the reason for that result is that just 60 percent of its ninth graders graduated from high school within four years, and only 42 percent of adult residents aged 25 to 44 had spent some time in college. Mayor Reardon hopes the high school internship and mentoring programs he has helped establish within the city government and within some of the county’s high-technology firms will help turn around those low scores on education.
Students need to see the link between college and a good job, he says, and to imagine themselves following that path.

NOT EVERYONE BELIEVES
Not every Kansas official has responded as enthusiastically as Mayor Reardon has. At a 2009 public meeting in Shawnee County (home to the state capital, Topeka), then County Commissioner Vic Miller dismissed Shawnee’s low health ranking (78 out of 105) as misleading. “Frankly, I can’t imagine what argument you’re going to promote that dropout rates in schools relate to public health,” Miller was quoted as saying in the Topeka Capital-Journal. Willems Van Dijk says that Miller’s skepticism is understandable, but the evidence that socioeconomic factors like education play a major role in health is solid and growing. For example, high school dropouts tend to die earlier than graduates. Further, their children are more likely to be born prematurely, robbing another generation of a healthy start. Every year of additional education improves those outcomes. “Research is now showing that many health effects once attributed to racial differences are actually tied to educational and economic disparities,” she says.

WHEN POLITICAL AND HEALTH PRIORITIES COLLIDE
No one expects a county’s overall ranking to improve overnight. “Where you are on the curve isn’t as important as which direction you’re moving,” Willems Van Dijk says. Wyandotte County was rated at or near the bottom of the Kansas rankings for three years in a row and is likely to be there again when the state’s latest numbers are released this spring. Yet Mayor Reardon is hopeful that the measures he is taking will ultimately shift the course. County planners must now consider the needs of pedestrians and bicyclists as well as drivers when designing road improvements, he notes. And a newly remodeled supermarket has doubled the amount of fresh fruits and vegetables available downtown. “There are a lot of polarizing issues in Kansas City,” he says, “but I’ve been pleasantly surprised to see that doing all we can to improve the health of our community isn’t one of them.” That mapmaking visionary of epidemiology, John Snow, would be proud.
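For readers curious how a composite ranking of this kind can be computed, here is a minimal sketch in Python. The category weights and county scores below are illustrative assumptions, not the project’s published values: counties are ordered by a weighted sum of standardized category scores, where lower means healthier.

    # Illustrative sketch of a weighted composite ranking. The weights and
    # the county z-scores are assumptions made up for this example only.
    CATEGORY_WEIGHTS = {
        "behavior": 0.30,
        "clinical_care": 0.20,
        "socioeconomic": 0.40,
        "physical_environment": 0.10,
    }

    # Hypothetical standardized scores per county; lower is better.
    counties = {
        "County A": {"behavior": -0.4, "clinical_care": 0.1,
                     "socioeconomic": -0.2, "physical_environment": 0.0},
        "County B": {"behavior": 0.9, "clinical_care": -0.3,
                     "socioeconomic": 1.1, "physical_environment": 0.2},
    }

    def composite(scores: dict) -> float:
        """Weighted sum of category z-scores."""
        return sum(CATEGORY_WEIGHTS[c] * z for c, z in scores.items())

    # Rank counties from healthiest (lowest composite) to least healthy.
    for name, scores in sorted(counties.items(), key=lambda kv: composite(kv[1])):
        print(name, round(composite(scores), 3))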

Why Build a Prototype




With any engineering endeavor where innovation is taking place at unprecedented levels, such as robotics, prototyping is an absolute must. Prototyping offers engineering teams the ability to test and understand whether a project is feasible both technically and economically, while mitigating the risk associated with building a ready-to-deploy system. Prototypes help you iterate on a design, keeping the parts that work while refining those that fall short of requirements. Ultimately, the prototype allows you to put your best foot forward when presenting to customers and investors, who help determine the level of success of your company.
Hardware engineers are often tempted to keep refining and optimizing each of the robot’s subsystems. The same can hold true for the software engineers on staff, constantly refining and optimizing code, with deadlines slipping as a result. This process of optimization can become a giant time sink at the beginning of a project, exactly when it is most important to validate whether the project is possible and economically viable. Many projects run out of money and time before anyone ever sees what the engineers have been working on. While cost is an important factor, the goal of the prototype is to create a platform that is within striking distance of profitability. The robotics team should focus on building a system that clearly demonstrates the value the robot offers. Setting this as your bar of success will help your team showcase your technology to the public before running out of capital. Once customers and investors are interested and supportive, your team can then focus on optimizing the design into an efficient and profitable system.

Reconfigurable I/O
Sensors and actuators are what allow a robot to experience and manipulate the world. Unfortunately, at the beginning of the design process it’s almost impossible to know all the details about the inputs and outputs of the system: what voltage levels are required, sampling rates, the number of input channels and the number of digital lines, to name a few. That said, incorporating I/O in your prototype is essential to creating a truly functional system. By adding sensory input and control output, engineers prove their design can be implemented in the real world. Creating a paper design, implementing that design in software and even simulating the design in a virtual environment are still largely conceptual exercises. To prove the value of your design to skeptical investors, the prototype needs to receive data and respond accordingly. Additionally, data from prototyping operations helps you refine functional requirements with clients and the rest of the design team based on actual performance. Choosing a prototyping platform that allows engineers to quickly swap out I/O and try new combinations allows your robot to be dynamic and to change as the engineers learn more about the problem they’re trying to solve. The robot in Figure 2 is a National Instruments-based platform that enables engineers to mix and match I/O depending on the needs of the system. This lets you quickly get a robot interacting with the real world while preserving the flexibility to change when necessary.

Design for Reuse
One aim of the prototype is to enable a move to a subsequent design, either one more optimized and closer to the end product or one that incorporates customer feedback. In either case, the engineering team must decide which components can be used in the next iteration of the design. Extra attention must be given to these components—whether a communication protocol or a software algorithm—to ensure that their interfaces and implementations make them as portable as possible in the next phase of development. This means maintaining consistent interfaces, decoupling components and keeping the design modular (see the sketch at the end of this article). When choosing tools to prototype your system, it is also important to consider whether those tools offer a platform that can support developing the system at the volume required and at a price point that is profitable.

Demonstrate Your Prototype
It should be easy to demonstrate your robotic prototype.
This prototype will become your calling card: the first thing that customers, venture capitalists, and potential employees notice. A prototype that is easy to set up and quickly illustrates what differentiates your product is the best way to generate positive buzz around the company and the robot. When pitching your idea, show the demo as quickly as possible. An impressive demo can do far more for your company and product than slides on a projector.
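To make the design-for-reuse advice concrete, here is a minimal sketch in Python of a decoupled sensor interface; all class and function names are hypothetical, invented for illustration rather than taken from any vendor’s API:

    # Minimal sketch of the decoupling idea: hide each sensor behind a small,
    # consistent interface so I/O can be swapped without touching control code.
    from abc import ABC, abstractmethod

    class RangeSensor(ABC):
        """Consistent interface: every range sensor reports distance in meters."""
        @abstractmethod
        def read_distance_m(self) -> float: ...

    class UltrasonicSensor(RangeSensor):
        def read_distance_m(self) -> float:
            return 1.25  # stub; a real driver would query the hardware

    class LidarSensor(RangeSensor):
        def read_distance_m(self) -> float:
            return 1.27  # stub; swapped in without changing the controller

    def obstacle_ahead(sensor: RangeSensor, threshold_m: float = 0.5) -> bool:
        # Control logic depends only on the interface, so the prototype can
        # mix and match I/O as requirements become clearer.
        return sensor.read_distance_m() < threshold_m

    print(obstacle_ahead(UltrasonicSensor()))
    print(obstacle_ahead(LidarSensor()))

Because obstacle_ahead depends only on the RangeSensor interface, swapping the ultrasonic unit for a lidar unit requires no change to the control logic, which is exactly the portability the next design iteration needs.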