How to Teach Yourself to Program

By Kayol R Hope
The web is full of free resources that can turn you into a programmer. If you've always wanted to learn how to build software yourself, or perhaps just write an occasional script, but had no clue where to start, then this guide is for you!
If you're interested in becoming a programmer, you can get off to a great start using tons of free web-based tutorials and resources. Since the early days of the internet, programmer communities have used it to discuss software development techniques, publish tutorials, and share code samples for others to learn from and use.
Choosing a Language
A common issue for beginners is getting hung up on trying to figure out which programming language is best to learn first. There are a lot of opinions out there, but there's no one "best" language. Here's the thing: in the end, language doesn't really matter. Understanding data structures, control structures, and design patterns is what matters. Every programming language, even a basic scripting language, has elements that will make other languages easier to understand.
Many programmers never take accredited academic courses and are self-taught in every language they use throughout their careers. They do this by reusing concepts they already know and referring to documentation and books to learn each new language's syntax. So instead of getting stuck on which language to learn first, simply pick the kind of development you want to do and get started with whichever language comes easiest to you.
There are several different kinds of software development you can do for various platforms: web development, desktop development, mobile device development, and command-line scripting.
Desktop Scripting
The easiest way to try your hand at programming on your Windows or Mac desktop is to start with a scripting or macro tool like AutoHotkey (for Windows) or Automator (for Mac). Sure, advanced coders may object that AutoHotkey and AppleScript are not "real" programming, which is technically true, as these tools just do high-level scripting. However, for those new to programming who just want to get their hands dirty automating actions on their desktop, these free tools teach essential fundamentals that carry over to "real" programming later on. Keep in mind that the line between when an application counts as scripting and when it counts as programming is often blurred; a common rule of thumb is that once your code is compiled, it is "real" programming. Most end users of an application don't know and shouldn't care, as long as it is well designed and functions robustly enough to serve its intended purpose.
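To make the idea concrete, here is the flavor of a desktop-automation task sketched in plain Python rather than AutoHotkey or Automator: a small script that tidies a folder by renaming screenshots to a consistent numbered pattern. The folder contents and naming scheme are made up for illustration.

```python
# A small desktop-automation chore: rename every .png in a folder to a
# consistent pattern (shot_001.png, shot_002.png, ...). This is the kind
# of task macro tools handle; here it is done with Python's stdlib.
import os
import shutil

def rename_screenshots(folder, prefix="shot"):
    """Rename each .png in `folder` to prefix_001.png, prefix_002.png, ..."""
    renamed = []
    pngs = sorted(f for f in os.listdir(folder) if f.endswith(".png"))
    for i, name in enumerate(pngs, start=1):
        new_name = f"{prefix}_{i:03d}.png"
        shutil.move(os.path.join(folder, name),
                    os.path.join(folder, new_name))
        renamed.append(new_name)
    return renamed
```

The same logic could be written as an AutoHotkey macro or an Automator workflow; the point is that automating a repetitive chore teaches loops, conditionals, and file handling, all of which transfer directly to "real" programming.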
Web Development
If you don't want to be bound to the look and feel of a particular operating system, consider developing your application for the browser instead and distributing it to a wider audience as a web app.
HTML and CSS: The first things you need to know to build any web site are Hypertext Markup Language (HTML), the markup that makes up web pages, and Cascading Style Sheets (CSS), the style information that controls the appearance of that markup. HTML and CSS are not full programming languages; they just describe page structure and style. However, you should be comfortable writing them by hand before you begin building web applications, because building basic webpages is a prerequisite to developing a dynamic web app.
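Here is what that hand-written starting point looks like: a minimal page where the HTML carries structure and an embedded CSS block carries appearance. The page is held in a Python string only so it can be written to disk and checked programmatically; the page content itself is invented for illustration.

```python
# A minimal hand-written web page. HTML provides structure; the <style>
# block carries the CSS that controls how that structure looks.
from html.parser import HTMLParser

PAGE = """<!DOCTYPE html>
<html>
<head>
  <title>My First Page</title>
  <style>
    body { font-family: sans-serif; }  /* CSS: appearance */
    h1   { color: navy; }
  </style>
</head>
<body>
  <h1>Hello, web!</h1>
  <p>Structure lives in HTML; style lives in CSS.</p>
</body>
</html>"""

class TagCollector(HTMLParser):
    """Collect every opening tag, to confirm the page parses cleanly."""
    def __init__(self):
        super().__init__()
        self.tags = []

    def handle_starttag(self, tag, attrs):
        self.tags.append(tag)

def tags_in(page):
    collector = TagCollector()
    collector.feed(page)
    return collector.tags
```

Writing pages like this by hand, rather than through a visual editor, is what builds the familiarity with markup that dynamic web development later depends on.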
JavaScript: After mastering development of static web pages with HTML and CSS, learning JavaScript is the next step in programming dynamic web pages in a web browser. JavaScript is what bookmarklets, Greasemonkey user scripts, Chrome Web Apps, and Ajax are made of.
Server-side scripting: Once you're comfortable making dynamic web pages locally in a web browser, you're probably going to want to put some dynamic server action behind them. To do that, you will need to learn a server-side scripting language. For example, to make a web-based contact form that sends an email somewhere based on what a user entered, a server-side script is required. Scripting languages like Python, Perl, or Ruby can talk to a database on your web server as well, so if you want to make a site where users can log in and store information, that is the proper way to go about it.
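As a sketch of the server-side piece, here is the storage half of that contact-form example in Python, using the standard library's sqlite3 module. A real site would run this behind a web server inside a request handler; the table layout and function names here are invented for illustration.

```python
# Server-side storage for a contact form: persist what a user submitted
# in a database so it can be read back later.
import sqlite3

def save_message(conn, name, email, message):
    """Persist one contact-form submission."""
    conn.execute(
        "CREATE TABLE IF NOT EXISTS messages (name TEXT, email TEXT, body TEXT)"
    )
    conn.execute(
        "INSERT INTO messages (name, email, body) VALUES (?, ?, ?)",
        (name, email, message),
    )
    conn.commit()

def all_messages(conn):
    """Return every stored submission as (name, email, body) tuples."""
    return conn.execute("SELECT name, email, body FROM messages").fetchall()
```

Note the `?` placeholders: letting the database driver substitute user input, rather than pasting it into the SQL string yourself, is the standard defense against SQL injection.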
Web frameworks: Instead of reinventing the wheel for every new web development project, some programmers have come up with development frameworks that handle the repetitive work of writing similar code over and over to build dynamic web sites. Many scripting languages offer a web-specific structure that makes common web application tasks easier. Web development frameworks include Ruby on Rails (for Ruby programmers), CakePHP (for PHP programmers), Django (for Python programmers), and the jQuery library (for JavaScript programmers).
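At their core, most of these frameworks do one thing you would otherwise rewrite for every project: map a URL path to the function that handles it. Here is a toy router in Python showing that idea; it is not the API of Rails, Django, or any real framework.

```python
# A toy illustration of what a web framework's router does: register
# handler functions for URL paths so dispatch logic is written once.
routes = {}

def route(path):
    """Decorator that registers a handler function for a URL path."""
    def register(handler):
        routes[path] = handler
        return handler
    return register

def dispatch(path):
    """Look up the handler for a path and call it, or 404."""
    handler = routes.get(path)
    if handler is None:
        return "404 Not Found"
    return handler()

@route("/")
def home():
    return "Welcome!"

@route("/about")
def about():
    return "About this site"
```

Real frameworks layer much more on top (templates, database access, sessions), but the register-and-dispatch pattern above is the repetitive plumbing they save you from rewriting.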
Web APIs: An API (Application Programming Interface) is a programmatic way for different pieces of software to talk to one another. For example, if you want to put a dynamic map on your web site, you can use Google Maps instead of building your own custom map: the Google Maps API makes it easy to programmatically include a map in a page with JavaScript. Almost every modern web service, including Twitter, Facebook, Google Docs, and Google Maps, offers an API that lets you include its data and widgets in your application. Integrating other web apps into your web application via APIs is a great way to enrich your web development. Every major web service API should offer thorough documentation and a quick-start guide.
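Most web APIs hand back structured data, usually JSON, which your code then picks apart. The sketch below parses a response of the general shape a mapping API might return; the payload here is entirely made up for illustration and is not taken from any real service.

```python
# Consuming an API-style JSON response: parse it, check the status,
# and extract the fields the application cares about.
import json

SAMPLE_RESPONSE = """{
  "status": "ok",
  "results": [
    {"name": "Coffee Shop", "lat": 37.78, "lng": -122.41},
    {"name": "Book Store",  "lat": 37.77, "lng": -122.42}
  ]
}"""

def place_names(raw):
    """Pull the place names out of an API response body."""
    data = json.loads(raw)
    if data.get("status") != "ok":
        return []
    return [item["name"] for item in data["results"]]
```

In a live application the `raw` string would come from an HTTP request to the service's documented endpoint; everything after that point, parsing and extracting, looks just like this.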
Command Line Scripting
If you want to write a program that takes textual or file input and outputs something useful, the command line is ideal. While the command line isn't as visually appealing as a web app or desktop application, it is best suited to quick scripts that automate processes.
Several scripting languages that work on a Linux-based web server also work at the command line, including Perl, Python, and PHP; learning one of them will make you conversant in both contexts. If becoming fluent in Unix is one of your programming goals, you should master shell scripting with bash. Bash is the command-line scripting language of *nix environments, and it can do everything from automating backups of your database and files to building out a full-fledged application with user interaction.
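As a sketch of the kind of quick command-line script described above (written in Python here, though Perl, PHP, or bash would serve equally well): read a file and print its most common words. The filename is whatever the user passes as the first argument.

```python
# A command-line script in the spirit described above: take file input,
# output something useful. Run as: python wordcount.py somefile.txt
import sys
from collections import Counter

def top_words(text, n=3):
    """Return the n most common words in `text` as (word, count) pairs."""
    words = text.lower().split()
    return Counter(words).most_common(n)

if __name__ == "__main__":
    with open(sys.argv[1]) as f:
        for word, count in top_words(f.read()):
            print(f"{count:5d}  {word}")
```

Scripts like this compose naturally with the rest of the command line: the output can be piped into `sort`, redirected to a file, or scheduled with cron, which is much of what makes the shell environment so productive.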
Add-on Development
Modern web apps and browsers are extensible with bits of software that plug into them and add features. Add-on development is gaining popularity as more developers look at existing applications and frameworks and want to add a specific feature to make them better.
With only a mastery of HTML, JavaScript, and CSS you can still do plenty in any web browser. Bookmarklets, Greasemonkey user scripts, and Stylish user styles are created with the same code that makes regular web pages, so they're worth learning even if you just want to tweak an existing site with a small snippet of code.
More advanced browser add-ons, like Firefox and Chrome extensions, let you do more. Developing Firefox and Chrome extensions requires familiarity with JavaScript, as well as XML and JSON, formats that look similar to HTML but have stricter rules.
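Those stricter rules are easy to trip over when writing an extension's configuration by hand (a Chrome extension's manifest.json, for instance): JSON requires double-quoted keys and forbids trailing commas. A small Python check makes the difference visible; the manifest contents are invented for illustration.

```python
# JSON looks like relaxed JavaScript but is stricter: keys must be
# double-quoted and trailing commas are not allowed.
import json

def is_valid_json(text):
    """Return True if `text` parses as strict JSON."""
    try:
        json.loads(text)
        return True
    except ValueError:
        return False
```

Running a manifest through a validator like this before loading an extension saves a round of cryptic browser error messages.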
Many free web applications, such as WordPress and Drupal, offer an extension framework as well. Both are written in PHP, making that language a prerequisite for developing for them.
Desktop Development
Learning web development first is a great segue into desktop application development, letting you apply skills gained in one context to another. Desktop development will vary with the operating system (OS), the software development kit (SDK) provided, and whether you want cross-platform support. Your web development skills can also be reused to distribute your desktop application across the web and market it to a larger audience.
Mobile Device App Development
Mobile applications like the ones found on smartphones and tablets are increasingly popular, and having your app listed on the iTunes App Store, Google Play Store (formerly known as the Android Market), Windows Marketplace, BlackBerry World, and the like can be rewarding. However, for the majority of beginning coders, delving into mobile development is a steep climb, because it requires a great deal of comfort and familiarity with advanced programming languages like Java and Objective-C to develop much more than a basic "Hello World" application.
The Long Road Ahead
Great coders are often meticulous problem-solvers who are passionate about what they do and fueled by the small, solitary victories of overcoming issues through trial and error. The path to a programming career is a long road of endless learning and frequent frustration, but it is rewarding and profitable nonetheless.
Kayol Hope has a diverse background and working knowledge specializing in IT consulting, programming, and web development. His blog and online community were established to house and showcase some of the best information technology and programming tutorials and articles around. His published tutorials not only produce great results and interfaces but also explain the techniques behind them in a friendly, approachable manner.
We hope our subscribers will learn a few tricks, techniques, and tips that they might not have seen before, helping them maximize their creative potential!


F.C.C. Describes 911 and Cellphone Problems

The Federal Communications Commission also said “a small number” of 911 service centers — the sites that receive emergency calls and link them with first responders — were out of service after the storm, the second time in recent months that 911 service has suffered weather-related failures. Many emergency calls were rerouted, officials said, to call centers that survived the storm.

“Our assumption is that communication outages could get worse before they get better,” Julius Genachowski, the F.C.C. chairman, told reporters in a conference call Tuesday afternoon. “I want to emphasize that the storm is not over,” he said, referring to both the weather and the facilities.

Verizon Wireless said Wednesday that 6 percent of its cell sites remained down in storm-affected areas, although all of its switching and data centers “are functioning normally.” T-Mobile issued a statement saying that roughly 20 percent of its network in New York City was out of service, as was up to 10 percent of its network in Washington.

AT&T declined to specify the status of its systems on Wednesday. All of the companies said they were working to assess and repair the damaged networks.

Some of the emergency calls that were affected by the storm were rerouted to new 911 service centers without electronic location information, which tells the operator where the call originated. This means public safety officials must rely on callers for details about where an emergency was occurring, Mr. Genachowski said.

F.C.C. officials declined to identify where the affected 911 centers were located, or which phone companies were responsible for servicing them.

Roughly one-quarter of the residents of the 10 states that were affected by the storm also lost cable television and broadband Internet service, killing most or all of the connections that millions of consumers were relying on for information.

Few radio broadcasters were affected by the storm, said David Turetsky, the chief of the F.C.C.’s public safety and homeland security bureau. Three stations received F.C.C. permission to broadcast at higher power levels, and one station relocated its transmissions on the broadcast spectrum because of damage to its radio tower.

The F.C.C. activated its disaster reporting information system during the storm, a voluntary system through which wireless, landline, broadcast, satellite and cable TV companies can report the status of their systems. Based on those reports, and its own on-the-ground assessments, the F.C.C. knows where the problems are and which companies are responsible for addressing them, but officials declined on Tuesday to make that information public.

In its manual for use of the disaster system, the F.C.C. says that the information “is sensitive for national security and/or commercial reasons” and therefore will be treated as “presumptively confidential.”

Similar storm-related 911 failures have been the subject of previous F.C.C. scrutiny. The commission is currently in the middle of a formal inquiry into the causes of widespread failures of 911 networks in June resulting from the derecho, a violent wind and thunderstorm.

“From isolated breakdowns in Ohio, Kentucky, Indiana, and Pennsylvania to systemic failures in Northern Virginia and West Virginia, it appears that a significant number of 911 systems and services were partially or completely down for several days,” the F.C.C. said in statements related to that inquiry.

Roughly one million people in Northern Virginia were affected by 911 failures in June, which primarily occurred in systems managed by Verizon. Company officials said before this week’s storm that they had made a number of improvements to their emergency systems and backups that would help them maintain service during the storm.

The commission collected public comments on the 911 failures over the summer, but it has yet to report its findings.

Like Apple, Google Now Has Devices That Come in Three Sizes

From top: the Nexus 4, Nexus 7 and Nexus 10.
With the addition of its new iPad Mini, Apple offers touch-screen devices in three sizes. Now Google is matching that by introducing a tablet that is meant to compete directly with the larger iPad.
Google on Monday unveiled the Nexus 10, a 10-inch tablet it developed with Samsung, and a new phone, the Nexus 4, that it made with LG. Google also said it would upgrade its seven-inch tablet, the Nexus 7, to include a cellular data connection.
Google’s Nexus line of devices shows off Google’s latest mobile software.
“We’re building pretty sensational world-class products here,” said Hugo Barra, director of product management for Android at Google, at a news conference in San Francisco on Monday. “You don’t find anything even remotely like that out there.”
Also on Monday, Microsoft held a press event in San Francisco to talk about the imminent release of Windows Phone 8, its new mobile operating system, which it announced in June.
Google, Apple and Microsoft are all building devices in part to recruit customers to use their other services and buy apps, music, books and other content from them.
The Nexus 10 tablet includes a high-resolution display and the newest Android software, which has a feature that allows the tablet to be shared by setting up separate user accounts, something the iPad does not have.
Most notably, the 10-inch screen size will allow Google to go after the market that Apple created with the 9.7-inch iPad: people who are buying full-size tablets instead of laptops. The iPad has been Apple’s most quickly adopted product ever, with 100 million tablets sold to date. Clearly, that market is a juicy target for Google, as well as for Amazon, which recently introduced a bigger 8.9-inch tablet.
With the Nexus 10’s starting price of $400, $100 less than the cheapest iPad, Google has a good chance of selling plenty of tablets, said Jan Dawson, a research analyst with Ovum. But Google would still not pose much of a threat to Apple because it has been selling its tablets at cost, Mr. Dawson said. Google’s goal is to build market share and profit from ads and content sales.
“Neither Google nor Samsung can afford to do that for long with the Nexus 10,” he said. “The more they sell, the more money they lose.”
The Nexus 4 phone has a few features that the last Nexus phone did not. Among them are wireless charging by setting the phone on a small charging station, faster processing, an improved screen, typing by moving a finger instead of pressing individual keys and panoramic photo-taking.
Google also had news about Google Play, its store for apps, books, music and videos, which has lagged other online stores because it has not offered as comprehensive a selection.
Its music service finally signed a deal to bring the catalog of the Warner Music Group — with Green Day, Madonna, Neil Young, the Red Hot Chili Peppers and hundreds of other acts — to its Google Play store. This means Google’s millions of Android users will have an essentially complete catalog of MP3s to buy.
Google also recently signed deals with Time Inc. for magazines and 20th Century Fox for movies, filling other major holes in its offerings.
At its event, Microsoft said Windows Phone 8 would appear on new smartphones made by Samsung, Nokia and HTC starting next month. It also talked about some new features, like Data Sense, a tool that allows people to see how much data apps are using, so they can close data-guzzling apps and avoid exceeding their data plans.
Microsoft has spent hundreds of millions of dollars developing and promoting its Windows Phone operating system since releasing it two years ago. But despite some rave reviews from critics, Windows Phone 7, the previous version, has been unpopular among consumers, with only about 2.5 percent of the American market to date.
Nokia, the Finnish phone maker, has staked its future on Windows Phone. It formed a partnership with Microsoft to ship Nokia Windows phones. But sales of its Lumia handsets featuring the software have been slow.
Terry Myerson, Microsoft’s corporate vice president for Windows Phone, said in an interview that he felt it was the right moment for the software, because it was getting strong support from manufacturers and carriers, and was coming out at the same time as Windows 8, Microsoft’s new desktop and tablet operating system. The architecture of Windows Phone 8 has been rewritten to share the core software in Windows 8, and many features will work between the operating systems, he said.
Ben Sisario and Claire Cain Miller contributed reporting.

on page B2 of the New York edition with the headline: Google Unveils a Larger Tablet as an iPad Competitor.

Yes, Driverless Cars Know the Way to San Jose

THE “look Ma, no hands” moment came at about 60 miles an hour on Highway 101.

Brian Torcellini, Google’s driving program manager, had driven the white Lexus RX 450h out of the parking lot at one of the company’s research buildings and along local streets to the freeway, a main artery through Silicon Valley. But shortly after clearing the on-ramp and accelerating to the pace of traffic, he pushed a yellow button on the modified console between the front seats. A loud electronic chime came from the car’s speakers, followed by a synthesized female voice.

“Autodriving,” it announced breathlessly.

Mr. Torcellini took his hands off the steering wheel, lifted his foot from the accelerator, and the Lexus hybrid drove itself, following the curves of the freeway, speeding up to get out of another car’s blind spot, moving over slightly to stay well clear of a truck in the next lane, slowing when a car cut in front.

“We adjusted our speed to give him a little room,” said Anthony Levandowski, one of the lead engineers for Google’s self-driving-car project, who was monitoring the system on a laptop from the passenger seat. “Just like a person would.”

Since the project was first widely publicized more than two years ago, Google has been seen as being at the forefront of efforts to free humans from situations when driving is drudgery. In all, the company’s driverless cars — earlier-generation Toyota Priuses and the newer Lexuses, recognizable by their spinning, roof-mounted laser range finders — have logged about 300,000 miles on all kinds of roads. (Mr. Torcellini unofficially leads the pack, with roughly 30,000 miles behind the wheel — but not turning it.)

But the company is far from alone in its quest for a car that will drive just like a person would, or actually better. Most major automobile manufacturers are working on self-driving systems in one form or another.

Google says it does not want to make cars, but instead work with suppliers and automakers to bring its technology to the marketplace. The company sees the project as an outgrowth of its core work in software and data management, and talks about reimagining people’s relationship with their automobiles.

Self-driving cars, Mr. Levandowski said, will give people “the ability to move through space without necessarily wasting your time.”

Driving cars, he added, “is the most important thing that computers are going to do in the next 10 years.”

For the automakers, on the other hand, self-driving is more about evolution than revolution — about building incrementally upon existing features like smart cruise control and parking assist to make cars that are safer and easier to drive, although the driver is still in control. Full autonomy may be the eventual goal, but the first aim is to make cars more desirable to customers.

“We have this technology,” said Marcial Hernandez, principal engineer at the Volkswagen Group’s Electronics Research Laboratory, up the road in Belmont, Calif. “How do we turn it into a product that can be advertised to a customer, that will have some benefit to a customer?”

With all the research efforts, there is a growing consensus among transportation experts that self-driving cars are coming, sooner than later, and that the potential benefits — in crashes, deaths and injuries avoided, and in roads used more efficiently, to name a few — are enormous. Already, Florida, Nevada and California have made self-driving cars legal for testing purposes, giving each car, in effect, its own driver’s license.

Richard Wallace, director for transportation systems analysis at the Center for Automotive Research, a nonprofit group that recently released a report on self-driving cars with the consulting firm KPMG, said that probably by the end of the decade, “we would be able to have a safe, hands-free left-lane commute.” In 15 to 20 years, he said, “literally from the driveway to destination starts to become possible.”

Most of the sensors are already in widespread use. Radar, for example, is used for features like adaptive cruise control, measuring the distance to the car ahead so that a safe interval can be maintained. Cameras are used in lane-keeping systems, recognizing lane stripes on the road so the car can be steered between them.

Digital encoders, specialized sensors that precisely measure wheel rotation, have been employed for years in antilock brakes and stability-control systems. Accelerometers have been used to measure changes in speed, particularly for air bags.

GPS devices are useful for self-driving systems, but only in giving a general sense of the car’s location. More important is knowing the car’s position in respect to other vehicles and objects in its immediate environment — information the other sensors provide.

“You use the sensors in the vehicle to very precisely place you locally,” Mr. Hernandez said.

In the move toward more autonomous vehicles, one tendency is to integrate the data from different sensors. Camera recognition systems may be fooled by shadows, for example, thinking they are objects, but radar is not readily tricked.

Some automakers are developing a feature known as traffic jam assist, which combines the information from radar and cameras to allow hands-off driving on the highway at speeds of about 30 m.p.h. or less.

“We’re taking the adaptive cruise control and the lane-keeping and bringing them together,” Mr. Hernandez said.

Traffic jam assist is a step toward more autonomy, but the car is still far from self-driving; it won’t change lanes, for example.

“A lot of this is getting people comfortable with the technology, showing people a benefit,” Mr. Hernandez said. “The idea is the driver is always in control — the vehicle is there to help you.”

Google’s fleet of about a dozen vehicles adds the rooftop laser units to gather a more useful data stream than the cameras and radar systems alone can do. Laser range finders, known as lidar units, have been used by some automakers to provide distance measurements for their adaptive cruise control systems.

But Google’s lidar is far more complex, consisting of 64 infrared lasers that spin inside a housing atop the car to take measurements in all horizontal directions. (Lidar systems like this are also very expensive — about $70,000 a unit — so cost and complexity will have to come down before they can be widely used.)

The units take so many measurements that, when combined with information from the radar and cameras, a moving map of the car’s surroundings can be created in the onboard computer, a fairly run-of-the-mill desktop. It’s a highly detailed map — the lidar can distinguish, for example, a pickup truck carrying something on a rack from a similarly sized, but boxier, delivery van.

“We like lidar because it is actually the most rich sensor you can put on a car,” Mr. Levandowski of Google said. “It helps you separate out people from bushes behind them, people from each other, people from crosswalks, and it helps you make a 3-D model of the world.”

Still, the key to a car being able to truly drive itself lies in the software. “The piece that’s missing is not better radars or cameras or lasers or whatever we’re using,” he said. “It’s really the intelligence behind them.”

Google’s engineers tweak that intelligence based on the driving experience of the test cars. Safely coping with four-way-stop intersections was really difficult, Mr. Levandowski said, because a certain amount of assertiveness — moving into the intersection slightly to see how other cars react — is required.

Fundamentally, though, the car has to operate safely, Mr. Levandowski said, so if another car tries to enter the intersection out of turn, the self-driving car will yield.

The learning is constant. On the way back from the Highway 101 drive, for instance, an extra-long articulated bus turned in front of the Lexus, which was now back in human-driving mode because the software had been optimized for only highway driving that day. But all the sensors were still doing their jobs, so the bus showed up on Mr. Levandowski’s laptop screen as a string of red dots that stretched out as the bus rounded the corner.

“Awesome bus,” Mr. Levandowski said as he typed a note for other engineers to take a look.

The system constantly compares the car’s map to detailed maps created by Google and downloaded to the car. Those maps provide a lot of additional information that helps with navigation, but they also help the car know when conditions have changed.

Perhaps construction barrels have just been set up, closing a lane, or a mattress or other object has fallen onto the road from a car. By comparing maps, the car knows its surroundings have changed, and it has to take some action: continue driving, alert the driver that it’s time to take back control or, if all else fails, pull over to the side of the road.

The communication is two-way, so in addition to downloading Google’s maps, the car can upload its map to Google. If several self-driving cars upload maps showing the new construction barrels, for example, Google can update the map it sends to other cars, letting those cars anticipate the hazard.

This connectivity is critical to Google’s approach, and is one reason its system is more advanced than other efforts. (For current and planned features like adaptive cruise control, car companies have not needed to consider communication, but as they move toward more fully autonomous vehicles they will have to, experts say.)

But even Google acknowledges that its system is not there yet.

“We think it is going to be feasible for a computer to drive a car safer than a person can in the not-too-distant future,” Mr. Levandowski said. “By no means are we there today. We are in the process of learning.”

If and when it is introduced, there will no doubt be limits. “What’s nice about these cars is you can actually confine where they operate and how they work because they know where they are,” Mr. Levandowski said.

So the system may work at first only on some highways, or in other specific situations.

“It’s not going to be George Jetson from day one,” he said.

on page AU1 of the New York edition with the headline: Yes, Driverless Cars Know the Way to San Jose.

Killing the Computer to Save It


One of those is Peter G. Neumann, now an 80-year-old computer scientist at SRI International, a pioneering engineering research laboratory here.

As an applied-mathematics student at Harvard, Dr. Neumann had a two-hour breakfast with Einstein on Nov. 8, 1952. What the young math student took away was a deeply held philosophy of design that has remained with him for six decades and has been his governing principle of computing and computer security.

For many of those years, Dr. Neumann (pronounced NOY-man) has remained a voice in the wilderness, tirelessly pointing out that the computer industry has a penchant for repeating the mistakes of the past. He has long been one of the nation’s leading specialists in computer security, and early on he predicted that the security flaws that have accompanied the pell-mell explosion of the computer and Internet industries would have disastrous consequences.

“His biggest contribution is to stress the ‘systems’ nature of the security and reliability problems,” said Steven M. Bellovin, chief technology officer of the Federal Trade Commission. “That is, trouble occurs not because of one failure, but because of the way many different pieces interact.”

Dr. Bellovin said that it was Dr. Neumann who originally gave him the insight that “complex systems break in complex ways” — that the increasing complexity of modern hardware and software has made it virtually impossible to identify the flaws and vulnerabilities in computer systems and ensure that they are secure and trustworthy.

The consequence has come to pass in the form of an epidemic of computer malware and rising concerns about cyberwarfare as a threat to global security, voiced alarmingly this month by the defense secretary, Leon E. Panetta, who warned of a possible “cyber-Pearl Harbor” attack on the United States.

It is remarkable, then, that years after most of his contemporaries have retired, Dr. Neumann is still at it and has seized the opportunity to start over and redesign computers and software from a “clean slate.”

He is leading a team of researchers in an effort to completely rethink how to make computers and networks secure, in a five-year project financed by the Pentagon’s Defense Advanced Research Projects Agency, or Darpa, with Robert N. Watson, a computer security researcher at Cambridge University’s Computer Laboratory.

“I’ve been tilting at the same windmills for basically 40 years,” said Dr. Neumann recently during a lunchtime interview at a Chinese restaurant near his art-filled home in Palo Alto, Calif. “And I get the impression that most of the folks who are responsible don’t want to hear about complexity. They are interested in quick and dirty solutions.”

An Early Voice for Security

Dr. Neumann, who left Bell Labs and moved to California as a single father with three young children in 1970, has occupied the same office at SRI for four decades. Until the building was recently modified to make it earthquake-resistant, the office had attained notoriety for the towering stacks of computer science literature that filled every cranny. Legend has it that colleagues who visited the office after the 1989 earthquake were stunned to discover that while other offices were in disarray from the 7.1-magnitude quake, nothing in Dr. Neumann’s office appeared to have been disturbed.

A trim and agile man, with piercing eyes and a salt-and-pepper beard, Dr. Neumann has practiced tai chi for decades. But his passion, besides computer security, is music. He plays a variety of instruments, including bassoon, French horn, trombone and piano, and is active in a variety of musical groups. At computer security conferences it has become a tradition for Dr. Neumann to lead his colleagues in song, playing tunes from Gilbert and Sullivan and Tom Lehrer.

Until recently, security was a backwater in the world of computing. Today it is a multibillion-dollar industry, though one of dubious competence, and safeguarding the nation’s computerized critical infrastructure has taken on added urgency. President Obama cited it in the third debate of the presidential campaign, focusing on foreign policy, as something “we need to be thinking about” as part of the nation’s military strategy.

Richard A. Clarke, the nation’s former counterterrorism czar and an author of “Cyber War: The Next Threat to National Security and What to Do About It” (Ecco/HarperCollins, 2010), agrees that Dr. Neumann’s Clean Slate effort, as it is called, is essential.

“Fundamentally all of the stuff we’re doing to secure networks today is putting bandages on and putting our fingers in the dike, and the dike springs a leak somewhere else,” Mr. Clarke said.

“We have not fundamentally redesigned our networks for 45 years,” he said. “Sure, it would cost an enormous amount to rearchitect, but let’s start it and see if it works better and let the marketplace decide.”

Dr. Neumann is one of the most qualified people to lead such an effort to rethink security. He has been there for the entire trajectory of modern computing — even before its earliest days. He took his first computing job in the summer of 1953, when he was hired to work as a programmer employing an I.B.M. card-punched calculator.

Today the SRI-Cambridge collaboration is one of several dozen research projects financed by Darpa’s Information Innovation Office as part of a “cyber resilience” effort started in 2010.

Run by Dr. Howard Shrobe, an M.I.T. computer scientist who is now a Darpa program manager, the effort began with a premise: If the computer industry got a do-over, what should it do differently?

The program includes two separate but related efforts: Crash, for Clean-Slate Design of Resilient Adaptive Secure Hosts; and MRC, for Mission-Oriented Resilient Clouds. The idea is to reconsider computing entirely, from the silicon wafers on which circuits are etched to the application programs run by users, as well as services that are placing more private and personal data in remote data centers.

Clean Slate is financing research to explore how to design computer systems that are less vulnerable to computer intruders and recover more readily once security is breached.

Dr. Shrobe argues that because the industry is now in a fundamental transition from desktop to mobile systems, it is a good time to completely rethink computing. But among the biggest challenges is the monoculture of the computer “ecosystem” of desktop, servers and networks, he said.

“Nature abhors monocultures, and that’s exactly what we have in the computer world today,” said Dr. Shrobe. “Eighty percent are running the same operating system.”

Lessons From Biology

To combat uniformity in software, designers are now pursuing a variety of approaches that make computer system resources moving targets. Already some computer operating systems scramble internal addresses much the way a magician might perform the trick of hiding a pea in a shell. The Clean Slate project is taking that idea further, essentially creating software that constantly shape-shifts to elude would-be attackers.
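The address-scrambling idea can be sketched in a few lines of C. This is a toy illustration only (the function name and constants are my own, not the project's): real address-space randomization is performed by the operating system's loader, but the principle is the same — pick a different, unpredictable base address each time the system starts, so an attacker cannot rely on data being where it was last time.

```c
#include <stdlib.h>

/* Toy sketch of address randomization: instead of placing a memory
   region at a fixed, predictable address, choose a random page-aligned
   base within an allowed window on each "boot". Real ASLR is done by
   the OS loader; this function only mimics the concept. */
unsigned long randomized_base(unsigned long min_base, unsigned long range_pages)
{
    const unsigned long page = 4096;
    /* pick a random page offset within the allowed window */
    unsigned long offset = ((unsigned long)rand() % range_pages) * page;
    return min_base + offset;
}
```

Because the base moves, an exploit hard-coded against one layout fails against the next; the Clean Slate work pushes this further by reshuffling continuously rather than once at startup.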

That the Internet enables almost any computer in the world to connect directly to any other makes it possible for an attacker who identifies a single vulnerability to almost instantly compromise a vast number of systems.

But borrowing from another science, Dr. Neumann notes that biological systems have multiple immune systems — not only are there initial barriers, but a second system consisting of sentinels like T cells has the ability to detect and eliminate intruders and then remember them to provide protection in the future.

In contrast, today’s computer and network systems were largely designed with security as an afterthought, if at all.

One design approach that Dr. Neumann’s research team is pursuing is known as a tagged architecture. In effect, each piece of data in the experimental system must carry “credentials” — an encryption code that ensures that it is one that the system trusts. If the data or program’s papers are not in order, the computer won’t process them.
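The tagged idea can be made concrete with a small sketch. Everything here is hypothetical (the struct, the checksum, the names are mine): a real tagged architecture would enforce the check in hardware and use strong cryptography rather than the toy checksum below, but the control flow is the point — data whose credential does not match what the system computes is simply refused.

```c
#include <string.h>
#include <stdint.h>

/* Toy model of a tagged architecture: every datum carries a credential
   tag, and the system refuses to process data whose tag does not match
   what it computes itself. A weak checksum stands in for the real
   encryption code. */
typedef struct {
    uint32_t tag;        /* credential accompanying the data */
    char     payload[32];
} tagged_word;

uint32_t compute_tag(const char *data, uint32_t secret)
{
    uint32_t h = secret;
    for (size_t i = 0; data[i] != '\0'; i++)
        h = h * 31u + (uint8_t)data[i];
    return h;
}

/* Returns 1 only if the data's "papers are in order". */
int trusted(const tagged_word *w, uint32_t secret)
{
    return w->tag == compute_tag(w->payload, secret);
}
```

Any tampering with the payload invalidates the tag, so the altered data is rejected before it is ever processed.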

For Dr. Neumann, one of the most frustrating parts of the process is seeing problems that were technically solved as long as four decades ago still plaguing the computer world.

A classic example is the “buffer overflow” vulnerability, a design flaw that permits an attacker to send a file with a long string of characters that overruns an area of a computer’s memory, causing the program to fail and making it possible for the intruder to execute a malicious program.
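The flaw is easy to reproduce in C. Below is a minimal sketch (the function names are illustrative): the unsafe copy trusts the caller's input length, so a long string overruns the destination buffer and clobbers adjacent memory, while the safe variant checks the bound first. That bounds check is, in essence, the discipline the Multics-era designs enforced decades ago.

```c
#include <string.h>

/* Classic buffer overflow: strcpy() performs no length check, so a
   source string longer than the destination buffer overruns it. */
void copy_unsafe(char *dst, const char *src)
{
    strcpy(dst, src);            /* no length check: overflow risk */
}

/* The fix: refuse any input that would not fit the buffer. */
int copy_safe(char *dst, size_t dstlen, const char *src)
{
    if (strlen(src) >= dstlen)
        return -1;               /* would overrun: reject */
    strcpy(dst, src);
    return 0;
}
```

An attacker exploiting the unsafe version overwrites whatever sits beyond the buffer, typically the function's return address, and redirects execution to code of his choosing.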

Almost 25 years ago, Robert Tappan Morris, then a graduate student at Cornell University, used the technique to make his worm program spread throughout an Internet that was then composed of only about 50,000 computers.

Dr. Neumann had attended Harvard with Robert Morris, Robert Tappan Morris’s father, and then worked with him at Bell Laboratories in the 1960s and 1970s, where the elder Mr. Morris was one of the inventors of the Unix operating system. Dr. Neumann, a close family friend, was prepared to testify at the trial of the young programmer, who carried out his hacking stunt with no real malicious intent. He was convicted and fined, and is now a professor at M.I.T.

At the time that the Morris Worm had run amok on the Internet, the buffer overflow flaw had already been known about and controlled in the Multics operating system research project, which Dr. Neumann helped lead from 1965 to 1969.

An early Pentagon-financed design effort, Multics was the first systematic attempt to grapple with how to secure computer resources that are shared by many users. Yet many of the Multics innovations were ignored at the time because I.B.M. mainframes were quickly coming to dominate the industry.

Hope and Worry

The experience left Dr. Neumann — who had coined the term “Unics” to describe a programming effort by Ken Thompson that would lead to the modern Unix operating system — simultaneously pessimistic and optimistic about the industry’s future.

“I’m fundamentally an optimist with regard to what we can do with research,” he said. “I’m fundamentally a pessimist with respect to what corporations who are fundamentally beholden to their stockholders do, because they’re always working on short-term appearance.”

That dichotomy can be seen in the Association for Computing Machinery Risks Forum newsgroup, a collection of e-mails reporting computer failures and foibles that Dr. Neumann has edited since 1985. With hundreds of thousands, and possibly millions, of followers, it is one of the most widely read mailing lists on the Internet — an evolving compendium of computer failures, flaws and privacy issues that he has maintained and annotated with wry comments and the occasional pun. In 1995 the list became the basis for his book “Computer-Related Risks” (Addison-Wesley/ACM Press).

While the Risks list is a reflection of Dr. Neumann’s personality, it also displays his longtime interest in electronic privacy. He is deeply involved in the technology issues surrounding electronic voting — he likes to quote Stalin on the risks: “It’s not who votes that counts, it’s who counts the votes” — and has testified, served on panels and written widely on the subject.

Dr. Neumann grew up in New York City, in Greenwich Village, but his family moved to Rye, N.Y., where he attended high school. J. B. Neumann, Dr. Neumann’s father, was a noted art dealer, first in Germany and then in New York, where he opened the New Art Circle gallery after moving to the United States in 1923. Dr. Neumann recalls his father’s tale of eating in a restaurant in Munich, where he had a gallery, and finding that he was seated next to Hitler and some of his Nazi associates. He left the country for the United States soon afterward.

His mother, Elsa Schmid Neumann, was an artist. His two-hour breakfast with Einstein took place because she had been commissioned to create a colorful mosaic of Einstein and had become friendly with him. The mosaic is now displayed in a reference reading room in the main library at Boston University.

Dr. Neumann’s college conversation was the start of a lifelong romance with both the beauty and the perils of complexity, something that Einstein hinted at during their breakfast.

“What do you think of Johannes Brahms?” Dr. Neumann asked the physicist.

“I have never understood Brahms,” Einstein replied. “I believe Brahms was burning the midnight oil trying to be complicated.”

Honest Advice Where to Buy Kindle Fire

Having your own Amazon Kindle Fire can provide you with a number of benefits, especially if you are an ebook lover. With the great features the device has to offer, it is a must-have gadget that lets you read ebooks and much more. If you are planning to purchase one, you might wonder where to buy a Kindle Fire. You need to find the best deals available, either in local stores or online.
Crossing paths with a great deal can make a big difference. The Kindle Fire is a popular, sought-after device, so you no longer need to worry about where to find one: many retailers, both physical and online, carry it at affordable prices.
If you decide to purchase one online, you will enjoy the advantage of being able to compare prices across sellers. The device runs about $199 when it is not on sale, which is still quite affordable for the majority of consumers.
Finding a lower-priced device can save you a lot. If you are shopping online, consider all the retailers that carry the gadget. Bear in mind that a good deal does not mean the device will work any differently, so it pays to hunt for discounts before you buy.
When checking online stores, Amazon is worth a look first. It often runs deals that knock a few dollars off the price if you pay with a certain kind of credit card. Amazon also hosts a marketplace of private sellers offering new and used devices for sale.
It is also a good idea to use a shopping comparison website to find the best deal on the Kindle Fire. Simply enter the name of the device, and every website selling it will appear. Once you find the lowest price, including shipping and tax, look for any available coupon codes; you may end up with a sizable discount. If you are wondering where to buy a Kindle Fire, a good comparison website is the key.
You should also check auction websites, which can offer great deals on both brand-new and slightly used Kindle Fire models. Because they are often priced low, you may have to spend some time bidding, and the bidding can get competitive, but the savings are usually worth the effort.

Alienware X51 Review - A Customizable Gaming Desktop With an Impressive Appearance

The Alienware X51 is a durable desktop offered by Dell. It's ideal for the console gamer who wants a bit more out of their PC gaming experience. You can expect a great deal of performance and speed from this desktop. In addition to playing games, you can also create content and manage digital media.
The moment you take it out of its box you will be impressed by the chassis design. The sleek black matte finish with chrome accents and customizable lighting areas create a unique appearance. The dual-orientation design allows for you to set the unit up in either a vertical or horizontal position. In short, it's designed much like a gaming console.
The Alienware X51 is powered by a second-generation quad-core Intel Core i5 processor running at 3 GHz, with 8 GB of DDR3 SDRAM. The hardware is complemented by an NVIDIA GeForce graphics unit featuring 1 GB of GDDR5 video memory. You will be able to play your favorite games just as they were meant to be played - smoothly and efficiently, without any lagging.
Like other systems in the Alienware line, the X51 comes with the Alienware Command Center, which gives users quick access to AlienFX lighting effects, applications, and the power management system. AlienAdrenaline is a feature new to the X51: it lets users create unique profiles that launch a series of events when activated. For instance, you can set a profile to shut down unnecessary programs whenever you need extra performance.
The lighting controls allow you to choose from a wide array of color and transition effects, and assign them to separate zones, including the touchpad and keyboard. In other words, YOU get to help choose the overall appearance of your desktop!
AlienFusion Power Management is set up to allow you to decide when you want to scale back power or optimize the power for maximum performance. You can save energy by using less power when using less intensive programs and applications.
There is plenty of room for connectivity and expansion, thanks to four USB 2.0 ports, SuperSpeed USB 3.0 ports, an HDMI output, surround sound speaker outputs, and more. Thanks to Wireless-N Wi-Fi, compatible with any Wireless-N router, you can keep the computer connected to your home network.
Other features and specs include:
· Blu-Ray Disc combo drive
· 7.1-channel surround sound
· 1 TB SATA hard drive
· Turbo boost technology
· DirectX 11 (with the NVIDIA graphics)
· 2 SPDIF digital outputs
If you are in the market for a gaming desktop, then the Alienware X51 is an excellent choice, with its neat appearance and ability to handle demanding graphics. It's also affordable compared to some other gaming desktops out there - especially if you take advantage of Dell computer coupons.
When it comes to online promo offers, nobody offers computer discounts like Dell. You will find some great Alienware X51 discounts to help you get the best deal possible on your gaming PC. The potential to save a lot of money is right in front of you - you just need to take advantage of it.

Things You Need To Know About The New Apple iPhone 5

By Jeemar Mel Pepino Vilan
The new Apple iPhone 5 has long been awaited by millions of customers worldwide, and its release date is September 21st this year. This phone is upgraded from top to bottom, from its screen size to its motherboard, and it is a big step beyond its predecessor, the iPhone 4. The overall design has been tweaked to make it 18% thinner, 20% lighter, and 12% smaller in volume than any previous iPhone.
The display is now noticeably brighter and more vibrant, thanks to the 4-inch Retina display. The applications you use and the games you play on the Apple iPhone 5 will have incredible hues and look more realistic than ever before. Moreover, HD viewing has never been this good.
If performance is what matters to you, this phone will definitely satisfy your needs. It is powered by the impressive A6 chip, and you will immediately notice the increased efficiency: performance is doubled compared to the A5 chip. You will never want to put this phone down.
During the iPhone 5 conference, Apple discussed these features along with the carriers that would sell the phone. Verizon is among the many carriers offering iPhones, and the Verizon Wireless iPhone 5 is great for individuals on the go: the company offers the fastest 4G network in the United States, so the iPhone 5 on Verizon is well positioned to deliver maximum efficiency.
If you are planning to purchase this product, the iPhone 5 price starts at $199 for the 16GB model, $299 for the 32GB, and $399 for the 64GB. This is a breakthrough for Apple, since they are now offering higher storage capacities, which is great for customers. You will never run out of memory, I assure you.
The specifications mentioned above are just a preview. I believe it is the best phone ever made, and sooner or later a new OS will be released for it, which will also benefit the phone geeks out there. You can also check iPhone 5 reviews on the internet for more information. Last but not least, its technological innovation will surely change the lifestyle of every individual, making them more productive in a unique manner. What are you waiting for? Get yours now!
I'm Jeemar, a freelance writer. If you have topics that need to be written, feel free to get in touch.

Android Smartphone: How to Choose the Best?

Many Smartphones have been introduced to the technology market in recent times, and rapidly changing technology and software make it difficult for users to choose among them. If you know the criteria for picking the best available Android Smartphone, you will easily be able to select one for yourself. Let us find out the criteria you need to know.
The first thing you need to consider is the specs. Let us scrutinize these one by one.
1. OS or Operating System
Before you decide on the latest available version, you first need to know that Android keeps getting updates. Therefore, if your Android device is not on par with the latest software version, you need not worry: you can upgrade the device to the latest Android version. Updates are available online, and you can easily download them through your device. The latest software version has many features and free offerings, and it is compatible with the newest Smartphones. With a new OS (Operating System) you get new wallpapers, faster access, a new menu, and other added features.
However, remember one thing: not every Android device is designed to receive updates. Often updates are available only for specific devices, while others are rendered obsolete. For example, if you already have Gingerbread, you will be tied to it until an update is released for your particular device. Remember that a software upgrade is never guaranteed, so it is better to opt for a device that is capable of receiving new updates, and to keep the most recent version in mind while making a purchase.
2. Updates
Being keen on software updates is understandable, but what if you cannot deploy them on your device? One essential tip is to go for the latest handset available. You might come across the various Xperia or Galaxy series and other brands; the latest devices are the ones capable of accepting software upgrades. Keep an eye on the newest version to learn whether the manufacturer plans to release another upgrade for that specific device.
When choosing an Android Smartphone, going for the newest version of the OS is crucial. If you cannot find one with Jelly Bean, settle for Ice Cream Sandwich, but check whether the manufacturer has promised, released, or is working on a Jelly Bean update. The most trustworthy manufacturers when it comes to device upgrades are HTC, Motorola, Samsung, and Google. HTC has the most impressive record of upgrading its devices, while Motorola (now owned by Google) has a reputation for fast upgrades. Note that manufacturer updates often come with newer widgets, tools, and fixes aimed at improving the overall use of the device.
Another essential thing to note is that you need a ROM capable of improved performance and of installing apps or additional tools. If you are looking to upgrade, first ensure that customized ROMs are available for that specific device. For example, Motorola phones have customized ROMs.
3. User Interface or UI
Each Android-powered device has a different UI, because the UI changes with each manufacturer, and the features vary from one brand to another. If you are not satisfied with the UI of your Smartphone, you can download third-party launchers such as Zeam, ADW, Go Launcher, or Launcher Pro.
4. Processor
The processor is the most essential component of your Smartphone. Consider this: if you are a gaming enthusiast hooked on 3D games, you need a Smartphone with a high-powered CPU. If you use your Smartphone for moderate activities, a single- or dual-core 1 GHz processor will do. A quad-core Smartphone makes sense for those who indulge in heavy gaming and video streaming throughout the day.
Some other facts:
• Choose a Smartphone that has a good internal memory like 4-8 GB if you like to download apps
• If you are interested in streaming videos and HD content, choose a phone with a faster processor, such as a quad-core
• For video calling facility you need to find a Smartphone with front facing camera in addition to rear camera
• The phone should be backed up with a high capacity mAh battery so that it may last longer
Keeping all the above given tips in mind will help you choose the best-suited Android powered phone.
Marigold Henry Montana is a technical expert who deals with various computer related issues. She is capable of handling the software as well as hardware troubleshooting. She advises people on basic upkeep and maintenance of their PC, Laptops, Notebooks and Tablets. She is working as a freelance technology writer and as fulltime online tech support for a well known IT software company. Her aim is to spread the technical know-how to the people all around the globe.

A Web of Answers and Questions

IT starts with a lowering of our shoulders. You and I have just befriended each other, and now we are well into our first cocktails on our first-ever get-together. We’ve bonded over a mutual appreciation of Roald Dahl, and now you’ve endeared yourself further with your comment that the name Real Simple sounds like a manual for people with learning disabilities.

When we hit our first lull in the conversation, I try to bridge it by asking you about the two years you lived in Boulder, Colo.

“How did you know I lived in Boulder?” you ask, darty-eyed.

“I Googled you last night. I’m sorry.”

“No, no. I’m, uh... I’m flattered?”

You are? Which is what I was hoping for? But suddenly the tiniest shred of doubt is implied by all the tonal upticks.

“It’s perfectly natural and almost always appropriate,” said Kate Fox, a social anthropologist, about the practice of Googling social or business contacts before getting together with them.

“Obviously, one is always going to have to be discreet when talking about what you’ve found,” said Ms. Fox, a director of the Social Issues Research Center in Oxford, England. “But our brains haven’t changed since the Stone Age, and humans are designed to live in small groups in which everyone knows one another. Googling is an attempt to recreate a primeval, preindustrial pattern of interaction.”

But by the same token, doesn’t taking this shortcut to a primeval, preindustrial pattern of recognition sometimes rob encounters of their inherent mystery? The song is called “Getting to Know You,” not “I’ve Already Researched You.” Sometimes it’s better not to pore over the dossier handed to us, even if it comes from a natural blonde with the State Department in a sweater set and pearls.

Worse, sometimes our online research lands us in thickets. Tina Jordan, an executive in book publishing who has the same name as a former girlfriend of Hugh Hefner, said, “I typically tell any blind dates before I meet them that they probably shouldn’t Google my name, otherwise they’ll be sorely disappointed when they meet me.”

Masami Takahashi, an associate professor of psychology at Northeastern Illinois University, used to use Japanese characters for his name whenever he delivered papers at academic conferences in Japan, until a colleague who had Googled him pointed out that Mr. Takahashi shared the same name in Japanese as a pornographic-film star. Mr. Takahashi said, “Since then, I use only the English alphabet for my name.”

Indeed, to Google is often to create expectation. A friend of Dean Olsher, a public-radio host and a musician, wanted to set him up on a date with one of Mark Morris’s dancers this year. Mr. Olsher promptly went online and started swooning over a gorgeous portrait of the dancer by Annie Leibovitz. But on the evening that Mr. Olsher and his friend trooped to Brooklyn to see her perform and to meet her, Mr. Olsher’s friend became distracted and never engineered the fix-up.

The disappointed Mr. Olsher said: “I don’t regret Googling her at all. I’m so baffled by this idea that we’re not supposed to Google people. Why would there be a line? Like everyone else is allowed to know something but I’m not?”

In business, the line described by Mr. Olsher barely exists, if at all, because Googling is expected. Job applicants who reveal their ignorance of the doings or leadership of the company they are interviewing with can expect to meet with no enthusiasm. “I always Google my prospective clients,” said Janet Montano, a real estate agent in Tampa, Fla. “The mug shots come right up on the top. ‘Not going to get in my car!’ ”

But Ms. Montano said she would never tell a potential homebuyer that she had Googled him. “It’s not very polite,” she said. “I don’t go there.” In one instance, she said, the search worked in a homebuyer’s favor. “It was someone who I probably wasn’t going to work with,” she said. “But then I checked him out and saw who he was.” When she learned he was a popular radio disc jockey, she realized he was a qualified buyer.

Ms. Jordan said: “On a professional level, it seems prudent to optimize one’s knowledge about a person, as long as you don’t make them feel like you’re a cyberstalker. On a personal level, though, it could be loaded. Sometimes best to let sleeping Google-surfing lie.”

Indeed, those of us prone to researching our new friends and acquaintances might profit from the realization that very little, if any, of what we hounds dig up in the garden needs to be presented to our masters. The devil, after all, is in the details. If we tell a new friend that we’ve read her LinkedIn entry or her wedding announcement, it probably won’t be perceived as trespassing, as long we bear no ulterior motives. If we happen to reveal that we’ve read her long-ago abandoned blog about her cat, we’re more likely to be seen as chronically bored than menacing.

But if we let on that we know how much she paid for her home, or who she made campaign contributions to, suddenly her ears might prick up.

These small bouts of alarm are only natural, according to Ms. Fox, the social anthropologist. “We’re getting back to life in a village,” she said. “It’s as if you’d returned to a small village and you started learning things about your neighbors while buying a pint of milk. It would feel uncomfortable at first. But at the back of your brain, it wouldn’t. It’s how we’re wired.”

Nevertheless, you can hire companies now to alter what comes up when people Google you, a fact that speaks to the public’s anxiety about the valence accorded search results. Under pressure from big media companies eager to combat online piracy, Google recently agreed to alter its search algorithms to favor Web sites that offer legitimate copyrighted movies, television and music; is a nonbusiness version of this advent in our future?

In an ideal world, we would all use Google to be better friends by having better recall. There’s nothing more flattering than the person who can summon from the depths of time your mother’s name or your wedding toast; you’ll warm your niece’s heart when you appear to have “remembered” her yearlong stint working at Macy’s.

Some of us have even been known to operate as unsolicited Google elves: earlier this year, hours before having dinner with a group of writers and editors, I found myself e-mailing two of the editors to remind them that their publication had printed one of the writers’ accounts of having recently lost her husband.

Consider the case of Joe Cramer, an auto detailer in Wyoming, Mich. He contracted carbon monoxide poisoning from an industrial accident in 1978, and for two years lost his memory and his ability to empathize. “I had to be guided like a little child,” Mr. Cramer said. “We didn’t have Google then.” His wife sat him on the couch and showed him pictures of family and friends, explaining who each was. His sister-in-law stood next to him at his shop, whispering prompts and reminders into his ear.

When his memory and empathy returned two years later, “I was inundated with waves and waves and waves of guilt,” he said. “The sadness of not knowing what result I’d get from responses from people was devastating. I lost a couple friends because of my inability to remember stuff or to get into the feelings of various situations.”

Mr. Cramer added: “I use Google constantly now. Oh, heavens, it would have been so much easier for me if I’d had it back then. I wouldn’t have been such a lost soul.”

Data-Gathering via Apps Presents a Gray Legal Area

BERLIN — Angry Birds, the top-selling paid mobile app for the iPhone in the United States and Europe, has been downloaded more than a billion times by devoted game players around the world, who often spend hours slinging squawking fowl at groups of egg-stealing pigs.
While regular players are familiar with the particular destructive qualities of certain of these birds, many are unaware of one facet: The game possesses a ravenous ability to collect personal information on its users.
When Jason Hong, an associate professor at the Human-Computer Interaction Institute at Carnegie Mellon University, surveyed 40 users, all but two were unaware that the game was storing their locations so that they could later be the targets of ads.
“When I am giving a talk about this, some people will pull out their smartphones while I am still speaking and erase the game,” Mr. Hong, an expert in mobile application privacy, said during an interview. “Generally, most people are simply unaware of what is going on.”
What is going on, according to experts, is that applications like Angry Birds and even more innocuous-seeming software, like that which turns your phone into a flashlight, defines words or delivers Bible quotes, are also collecting personal information, usually the user’s location and sex and the unique identification number of a smartphone. But in some cases, they cull information from contact lists and pictures from photo libraries.
As the Internet goes mobile, privacy issues surrounding phone apps have moved to the front lines of the debate over what information can be collected, when and by whom. Next year, more people around the world will gain access to the Internet through mobile phones or tablet computers than from desktop PCs, according to Gartner, the research group.
The shift has brought consumers into a gray legal area, where existing privacy protections have failed to keep up with technology. The move to mobile has set off a debate between privacy advocates and online businesses, which consider the accumulation of personal information the backbone of an ad-driven Internet.
In the United States, the data collection practices of app makers are loosely regulated, if at all; some do not even disclose what kind of data they are collecting and why. Last February, the California attorney general, Kamala D. Harris, reached an agreement with six leading operators of mobile application platforms that they would sell or distribute only mobile apps with privacy policies that consumers could review before downloading.
In announcing the voluntary pact with Amazon, Apple, Google, Hewlett-Packard, Microsoft and Research in Motion, whose distribution platforms make up the bulk of the American mobile app market, Ms. Harris noted that most mobile apps came without privacy policies.
“Your personal privacy should not be the cost of using mobile apps, but all too often it is,” Ms. Harris said at the time.
But simple disclosure, in itself, is often insufficient.
The maker of Angry Birds, Rovio Entertainment of Finland, discloses its information collection practices in a 3,358-word policy posted on its Web site. But as with most application makers around the world, the terms of Rovio’s warnings are more of a disclaimer than a choice.
The company advises consumers who do not want their data collected or ads directed at them to visit the Web site of its analytics firm, Flurry, and to list their details on two industry-sponsored Web sites. But Rovio notes that some companies do not honor the voluntary lists.
As a last resort, Rovio cautions those who want to avoid data collection or ads simply to move on: “If you want to be certain that no behaviorally targeted advertisements are displayed to you, please do not use or access the services.”
Despite multiple requests by phone and Internet over five days, Rovio did not respond to questions.
Policy practices like Rovio’s often do little to inform consumers. Most people simply click through privacy permissions without reading them, said Mr. Hong, the Carnegie Mellon professor. His institute is developing a software tool called App Scanner that aims to help consumers identify what types of information an application is collecting and for what likely purpose.
In Europe, lawmakers in Brussels are planning to bring Web businesses for the first time under stringent data protection rules and to give consumers new legal powers, the better to control the information that is being collected on them.
Proposed revisions to the European Union’s General Data Protection Regulation now before the Civil Liberties, Justice and Home Affairs Committee of the European Parliament would require Web businesses to get explicit consent from consumers to collect data. A proposal would also give consumers the ability to choose what information an app can store on them without losing the ability to use the software.
But the drafting of the revisions, which are not expected until late 2013 at the earliest, has set off a concerted lobbying battle by global technology companies, most of which are based in the United States, to weaken the consent requirements, which could undermine the advertising-financed business models that drive many free applications.

I.B.M. Reports Nanotube Chip Breakthrough

The face of an I.B.M. research scientist, Hongsik Park, is reflected in a wafer used to make microprocessors. (Photo: I.B.M. Research)

SAN FRANCISCO — I.B.M. scientists are reporting progress in a chip-making technology that is likely to ensure that the basic digital switch at the heart of modern microchips will continue to shrink for more than a decade.

The advance, first described in the journal Nature Nanotechnology on Sunday, is based on carbon nanotubes — exotic molecules that have long held out promise as an alternative to silicon from which to create the tiny logic gates now used by the billions to create microprocessors and memory chips.

The I.B.M. scientists at the T.J. Watson Research Center in Yorktown Heights, N.Y., have been able to pattern an array of carbon nanotubes on the surface of a silicon wafer and use them to build hybrid chips with more than 10,000 working transistors.
Against all expectations, silicon-based chips have continued to improve in speed and capacity for the last five decades. In recent years, however, there has been growing uncertainty about whether the technology would continue to improve.
A failure to increase performance would inevitably stall a growing array of industries that have fed off the falling cost of computer chips.
Chip makers have routinely doubled the number of transistors that can be etched on the surface of silicon wafers by shrinking the size of the tiny switches that store and route the ones and zeros that are processed by digital computers.
The switches are rapidly approaching dimensions that can be measured in terms of the widths of just a few atoms.
The process known as Moore’s Law was named after Gordon Moore, a co-founder of Intel, who in 1965 noted that the industry was doubling the number of transistors it could build on a single chip at routine intervals of 12 to 18 months.
To maintain that rate of progress, semiconductor engineers have had to consistently perfect a range of related manufacturing systems and materials that continue to perform at ever more Lilliputian scales.
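As a back-of-the-envelope illustration of the doubling described above, the sketch below compounds a transistor count over repeated doubling intervals. It is not from the article: the starting count of 50 transistors and the 18-month doubling interval are illustrative assumptions chosen from the range Moore cited.

```python
# Illustrative sketch of Moore's Law growth: transistor count doubles
# once every `doubling_months` months. The baseline of 50 transistors
# and the 18-month interval are assumptions for the example.
def transistors(years_elapsed, start=50, doubling_months=18):
    doublings = (years_elapsed * 12) / doubling_months
    return start * 2 ** doublings

print(round(transistors(0)))   # 50 at the baseline
print(round(transistors(3)))   # 200 — two doublings in three years
print(round(transistors(30)))  # ~52 million after three decades
```

The point of the arithmetic is the compounding: a modest per-interval factor of two, sustained over decades, yields the million-fold growth that silicon chips have actually delivered.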
Vials contain carbon nanotubes that have been suspended in liquid. (Photo: I.B.M. Research)
The I.B.M. advance is significant, scientists said, because the chip-making industry has not yet found a way forward beyond the next two or three generations of silicon.
“This is terrific. I’m really excited about this,” said Subhasish Mitra, an electrical engineering professor at Stanford who specializes in carbon nanotube materials.
The promise of the new materials is twofold, he said: carbon nanotubes will allow chip makers to build smaller transistors while also probably increasing the speed at which they can be turned on and off.
In recent years, while chip makers have continued to double the number of transistors on chips, their performance, measured as “clock speed,” has largely stalled.
This has required the computer industry to change its designs and begin building more so-called parallel computers. Today, even smartphone microprocessors come with as many as four processors, or “cores,” which are used to break up tasks so they can be processed simultaneously.
I.B.M. scientists say they believe that once they have perfected the use of carbon nanotubes — sometime after the end of this decade — it will be possible to sharply increase the speed of chips while continuing to sharply increase the number of transistors.
This year, I.B.M. researchers published a separate paper describing the speedup made possible by carbon nanotubes.
“These devices outperformed any other switches made from any other material,” said Supratik Guha, director of physical sciences at I.B.M.’s Yorktown Heights research center. “We had suspected this all along, and our device physicists had simulated this, and they showed that we would see a factor of five or more performance improvement over conventional silicon devices.”
Carbon nanotubes are one of three promising technologies engineers hope will be perfected in time to keep the industry on its Moore’s Law pace.
Graphene is another promising material that is being explored, as well as a variant of the standard silicon transistor known as a tunneling field-effect transistor.
Dr. Guha, however, said carbon nanotube materials had more promising performance characteristics and that I.B.M. physicists and chemists had perfected a range of “tricks” to ease the manufacturing process.
Carbon nanotubes are essentially single sheets of carbon rolled into tubes. In the Nature Nanotechnology paper, the I.B.M. researchers described how they were able to place ultrasmall rectangles of the material in regular arrays by placing them in a soapy mixture to make them soluble in water. They used a process they described as “chemical self-assembly” to create patterned arrays in which nanotubes stick in some areas of the surface while leaving other areas untouched.
Perfecting the process will require a more highly purified form of the carbon nanotube material, Dr. Guha said, explaining that less pure forms are metallic and are not good semiconductors.
Dr. Guha said that in the 1940s scientists at Bell Labs had discovered ways to purify germanium, a metal in the carbon group that is chemically similar to silicon, to make the first transistors. He said he was confident that I.B.M. scientists would be able to make 99.99 percent pure carbon nanotubes in the future.