Microsoft’s Bet on Conversational Intelligence

Hot on the heels of its huge acquisition of LinkedIn, Microsoft is betting on another, lesser-known startup to give it an edge in the conversational intelligence race. Wand Labs is a tiny startup with just seven employees, but Microsoft saw enough promise in the messaging app technologies the team has been building since 2013 to acquire Wand this past month.

So how does this acquisition fit into Microsoft’s larger strategy of moving away from being a software company to positioning itself as a nimble cloud and mobile contender? According to the announcement on Microsoft’s official blog, “Wand Labs’ technology and talent will strengthen our position in the emerging era of conversational intelligence, where we bring together the power of human language with advanced machine intelligence — connecting people to knowledge, information, services and other people in more relevant and natural ways. It builds on and extends the power of the Bing, Microsoft Azure, Office 365 and Windows platforms to empower developers everywhere.”

So what is conversational intelligence, and why is it so important? We are moving into a future where we can expect messaging technology to act intelligently, with interfaces that allow collaborative tasks such as sharing songs or letting a friend control your Nest thermostat. This is part of a larger industry trend of building bots and virtual assistants that can handle the smaller tasks of life through simple voice or swipe commands. Microsoft's acquisition of Wand Labs signals its willingness to bring on new talent to push its capabilities beyond what it has already done with Cortana, the company's personal assistant app.

Wand Labs was founded by Vishal Sharma, a Google veteran who has been ahead of the intelligent-apps curve for years. His expertise will be a big asset as Microsoft makes inroads in third-party developer integration, semantic ontology and service mapping. Microsoft CEO Satya Nadella calls this "Conversation as a Platform," and it will be integral to the future integration of all the disparate tech we use on a daily basis. Stay tuned to see what the Wand and Microsoft team rolls out in the near future.

3D Printing Body Parts: Where Scientists Are & What Comes Next

3D printing is one of the latest technological advances of the modern age but few people have even made use of 3D printers at home or in their office. Industrial manufacturing companies are tapping into 3D printing to produce everything from jet engine parts to soccer cleats, reports PricewaterhouseCoopers. Now, scientists and medical professionals are taking the lead on re-creating human tissue and body parts using 3D printing technology.

The future of healthcare and medicine may very well involve 3D-printed implants and tissues. Here's a closer look at where scientists are now, and what's coming next:

3D Printing for Implant Surgery

Surgeons and medical professionals have been trying to find effective solutions for bone grafting and joint replacement techniques for years, often turning to a patient’s own bone and tissues as a donor or resorting to cadavers and animals for donor tissue. Many surgeons use synthetic grafting materials made with compounds that easily integrate with human bone and tissue.

With 3D printing, we could manufacture bone and joint tissues completely customized for the patient: real, living tissues and organs ready for implantation.

Mashable recently reported on the world’s first implant surgery using 3D-printed vertebrae. A neurosurgeon at the Prince of Wales Hospital in Sydney, Australia, treated a patient who had a tumor in his spine using a custom-printed body part created with a 3D printer.

Removing the tumor with traditional surgical methods was too risky because of its location, yet without treatment it would have compressed the brain and spinal cord, rendering the patient quadriplegic. The surgeon worked with medical device company Anatomics to create a titanium implant using 3D printing technology, and thanks to that custom part, the surgery was a success.

The Future of 3D Printing Body Parts

Medical research on a 3D bioprinting system that can produce human-scale tissue with structural integrity has been published in Nature Biotechnology. The authors highlight that future developments could make it possible to build solid organs and complex tissues.

The Integrated Tissue and Organ Printing system (ITOP) uses biodegradable material and a water-based ink that holds cells together to build bio-compatible tissues. Science Magazine reports how the 'tissue printer' creates printed materials containing live cells. The final product has a fully developed blood supply and internal structure that looks and functions just like real tissue.

These live materials could be used as transplants in a variety of surgical procedures. Considering that more than 121,000 people are on the waiting list for an organ transplant in the United States alone, according to the U.S. Department of Health and Human Services, 3D printing live, transplantable tissue and organs could quite literally save lives.

3D printing technology is evolving at a rapid pace and is making notable waves in the scientific and medical communities. Using synthetic grafting materials, or even resorting to metal implants for bone and tissue replacement surgeries, could soon be a thing of the past. Surgeons and scientists are developing new ways to treat patients, creating ‘living’ tissue, organs, and body parts made with bio-compatible materials and 3D printing technologies.

Photo: Wake Forest Institute for Regenerative Medicine

Nanotechnology Could Hold the Key to Self-Cleaning Clothes

Today's washing machines use a whopping 27 gallons of water to wash a single load of clothes. In the near future, thanks to a new nanotech breakthrough, we may not only save time and money but also take a huge environmental step forward in how we clean clothes.

Researchers at RMIT University in Melbourne, Australia have developed a cost-effective and efficient new method for cleaning clothes that builds the cleaner right into the garment. By growing special nanostructures that degrade organic matter when exposed to sunlight directly onto a textile, the scientists hope to eliminate the washing process entirely.

Just imagine: spill something on your shirt, and you'd only need to step into the sunlight for the shirt to eliminate the stain itself. While this sounds like science fiction, research into smart textiles has been going on for some time, and this latest breakthrough could have practical applications for catalyst-based industries such as agrochemicals, pharmaceuticals and natural products. The technology could also be scaled up to industrial and consumer applications in the future.

“The advantage of textiles is they already have a 3D structure so they are great at absorbing light, which in turn speeds up the process of degrading organic matter,” said Dr. Rajesh Ramanathan, lead researcher on this exciting project.

The particular nanostructures capable of absorbing light are copper- and silver-based varieties. When exposed to light, they receive an energy boost that makes them release "hot electrons," which degrade organic matter.

We’re not quite at the stage of throwing out our washing machines just yet, though. The next step is for researchers to test these nanostructures in combination with organic compounds more relevant to consumer apparel. How would these hot electrons stand up to the dreaded ketchup or wine stain?

For more on this exciting breakthrough, check out the findings, presented in the journal Advanced Materials Interfaces. Stay tuned for progress on this “strong foundation for the future development of fully self-cleaning textiles.”

Photo: RMIT University

How I-SDS Lets Enterprises Ride the Big Data Wave

In 2011, venture capitalist Marc Andreessen correctly predicted that software and online services would soon take over large sectors of the economy. In 2016 we can expect to see software revolutionize the economy again, this time by eating the storage world. Enterprises that embrace this new storage model will have a much easier time riding the big data wave.

It's no secret that data is the new king. From the rise of big data to artificial intelligence to analytics to machine learning, data is in the driver's seat. But where we've come up short so far is in managing, storing, and processing this tidal wave of information. Without a new method of storing data so that it's easy to sort, access, and analyze, we'll get crushed by the very wave that's supposed to carry us to better business practices.

Storage's old standby, the hardware stack, is no longer the asset it once was. In the age of big data, housing data on hardware is a limitation. Instead, a new method is emerging that allows for smarter storing, collecting, manipulating and combining of data by relying less on hardware and more on—you guessed it—software.

But not just any old software. What sets Intelligent Software Designed Storage (I-SDS) apart is that its computational model moves away from John von Neumann's longstanding design toward one that mimics how the human brain processes vast amounts of data. After all, we've been computing big data in our heads our entire lives. We don't need to store data just to store it; we need quick access to it on command.

One example of an I-SDS uses a unique clustering methodology based on the Golay code, a linear error-correcting code used in NASA's deep space missions, among other applications, which allows big data streams to be clustered. Additionally, I-SDS implements a multi-layer multiprocessor conveyor so that continuous-flow transformations can be made on the fly. Approximate search and stream extraction combine to allow the processing of huge amounts of data while simultaneously extracting the most frequent and appropriate outputs from the search. Together these techniques give I-SDS a huge advantage over older storage models, improving speed while still achieving high levels of accuracy.
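To make the clustering idea concrete, here is a toy Python sketch of nearest-codeword bucketing. For brevity it uses a tiny [7,4] Hamming code rather than the 24-bit Golay code described above; the principle is the same, in that noisy chunks snap to their closest codeword, so similar streams land in the same cluster. All names here are illustrative, not part of any I-SDS product.

```python
# Toy nearest-codeword clustering: similar bit-streams map to the same
# bucket because each chunk snaps to its closest codeword. A [7,4] Hamming
# code stands in for the 24-bit Golay code, which works the same way.
from itertools import product

# Generator matrix for the [7,4] Hamming code: 4 data bits, 3 parity bits.
G = [
    [1, 0, 0, 0, 1, 1, 0],
    [0, 1, 0, 0, 1, 0, 1],
    [0, 0, 1, 0, 0, 1, 1],
    [0, 0, 0, 1, 1, 1, 1],
]

def encode(data):
    """Encode 4 data bits into a 7-bit codeword."""
    return tuple(sum(d * g for d, g in zip(data, col)) % 2 for col in zip(*G))

CODEWORDS = [encode(bits) for bits in product([0, 1], repeat=4)]

def nearest_codeword(chunk):
    """Snap a (possibly noisy) 7-bit chunk to its closest codeword."""
    return min(CODEWORDS, key=lambda c: sum(x != y for x, y in zip(c, chunk)))

def bucket_key(stream):
    """Cluster key for a bit-stream: its sequence of nearest codewords."""
    chunks = (tuple(stream[i:i + 7]) for i in range(0, len(stream) - 6, 7))
    return tuple(nearest_codeword(c) for c in chunks)

clean = list(encode((1, 0, 1, 1)) + encode((0, 1, 1, 0)))
noisy = list(clean)
noisy[3] ^= 1                                   # flip one bit of "noise"
print(bucket_key(clean) == bucket_key(noisy))   # True: same cluster
```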

The key to successful I-SDS rests on three pillars:


1. Abstraction

The ability to seamlessly integrate outdated legacy systems, current systems, future unknown systems and even component-level technologies is the hallmark of an SDS with a rich abstraction layer. That layer allows a rich set of data services to act upon data with high reliability and availability. It also fosters policy and access control, providing the mechanisms for resource trade-offs and the enforcement of corporate policies and procedures. SDS likewise supports non-disruptive expansion of capacity and capability, geographic diversity and self-service models. Lastly, abstraction makes it possible to incorporate the growing public/private hybrid cloud infrastructures and to optimize their usage.

2. Analytics

Analytics has become the new currency of companies. Tableau (NYSE: DATA) and Splunk (NASDAQ: SPLK) have shown the broad appetite for analytics and visualization tools that do not require trained programmers, putting these capabilities in the hands of a broad class of enterprise users. User experience is a key component: simplicity with power. Cloud and mobile accessibility ensure data is available, scalable and usable anywhere, anytime; the cloud brings scale in numerous dimensions, including data size, computing horsepower and accessibility. Multi-tenancy with role-based security and access allows analytics and visualization to reach a broad set of enterprise (and partner) stakeholders, increasing the collective intelligence of the system. And cloud systems that are heterogeneous and multi-tenant allow analytics that cross systems, vendors, and in some cases customer boundaries, rapidly enlarging the data set and potentially producing much faster and more relevant results.

3. Action

Intelligent Action is built on full API-based interfaces. Exposing APIs allows the extension of capabilities and the application of resources. Closed monolithic systems from existing and upstart vendors essentially say, "Give me your data, and as long as it's only my system, I will try to do the right optimization." But applications and large data sets are complex; it is highly unlikely that over the ten-year life of a system an enterprise will not deploy many different capabilities from numerous vendors. An enterprise may wish to optimize along many parameters outside a monolithic system's understanding, such as the cost of the network layer, the standard deviation of response time, or the percentage of workload in a public cloud. Furthermore, the lack of fine-grained controls over items like caching policy and data reduction methods makes it extremely difficult to balance the needs of multiple applications in an infrastructure. Intelligent Action requires a rich programmatic layer, a set of fine-grained APIs, that the I-SDS can use to optimize across the data center from the application to the component layer.
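As a thought experiment, the sketch below shows what fine-grained, per-application control could look like. Every class, method and parameter name here is hypothetical, invented only to illustrate the kinds of knobs (caching policy, data reduction, tail latency, cloud placement) that monolithic systems keep hidden.

```python
# Hypothetical storage-policy API of the kind the article argues I-SDS
# should expose. None of these names come from a real product; they only
# illustrate per-application, per-parameter control.
from dataclasses import dataclass

@dataclass
class StoragePolicy:
    app: str
    cache_policy: str           # e.g. "write-back", "write-through", "none"
    data_reduction: str         # e.g. "dedupe", "compress", "none"
    max_p99_latency_ms: float   # tail-latency target for this application
    public_cloud_pct: int       # % of the workload allowed in a public cloud

class StorageFabric:
    """Records one policy per application; a real fabric would push these
    settings down from the application layer to the component layer."""
    def __init__(self):
        self.policies = {}

    def apply(self, policy: StoragePolicy):
        self.policies[policy.app] = policy
        print(f"{policy.app}: cache={policy.cache_policy}, "
              f"reduce={policy.data_reduction}, "
              f"p99<={policy.max_p99_latency_ms}ms, "
              f"cloud<={policy.public_cloud_pct}%")

fabric = StorageFabric()
fabric.apply(StoragePolicy("oltp-db", "write-back", "none", 2.0, 0))
fabric.apply(StoragePolicy("analytics", "none", "compress", 50.0, 80))
```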

Into the Future

This type of rich capability is what underlies the private clouds of Facebook, Google, Apple, Amazon, and Alibaba. I-SDS will allow the enterprise to achieve the scale, cost reduction and flexibility of these leading global infrastructures while maintaining corporate integrity and control of its precious data. The time has come for software to eat the storage world, and enterprises that embrace this change will come out on top.

IT Revolution: How In Memory Computing Changes Everything

In 2000, a relatively unknown entrepreneur at the Intel Developer Forum said he’d like to take the entire Internet, which then existed as bits on hard drives scattered around the world, and put it on memory to speed it up.

“The Web, a good part of the Web, is a few terabits. So it’s not unreasonable,” he said. “We’d like to have the whole Web in memory, in random access memory.”


The comment raised eyebrows, but it was quickly forgotten. After all, the speaker, Larry Page, wasn’t well known at the time. Neither was Google for that matter, as the company’s backbone then consisted of 2,400 computers.

Flash forward to today. Google has become one of the world's most important companies, and 2,400 servers would barely fill a corner of a modern datacenter. Experts estimate that Google now operates more than 1 million servers, and the Web has ballooned way past a few terabits: Facebook alone has 220 billion photos and juggles 4.5 billion updates, likes, new photos and other changes every day.

But Page’s original idea is alive and well. In fact, it’s more relevant than ever. Financial institutions, cloud companies and other enterprises with large data centers are shifting toward keeping data ‘in memory.’ Even Gartner picked In-Memory Computing (IMC) as one of the top ten strategic initiatives of 2013.

Data Center History In the Making

Chalk it up to an imbalance in the pace of change. Moore’s Law is still going strong: microprocessors double in performance and speed roughly every two years. Software developers have created analytics that let researchers crunch millions of variables from disparate sources of information. Yet, the time it takes a server or a smartphone to retrieve data from a storage system deep in the bowels of a cloud company or hosting provider on behalf of a business or consumer hasn’t decreased much at all.

Then as now, the process involves traveling across several congested lanes of traffic and then searching a spinning, mechanical hard drive. It is analogous to having to go home and get your credit card number every time you want to make a purchase at Amazon from work.

The lag has forced engineers and companies into unnatural acts. Large portions of application code are written today to maximize the use of memory and minimize access to high-latency storage. Likewise, many enterprise storage systems use only a small portion of the disk space they buy, storing data on the outer edges of disks to reduce access time. To use another analogy, it is like renting an entire floor of an office building but using only the first fifteen square feet near the elevator so people can get in and out faster during rush hour.

IMC ameliorates these problems by reducing the need to fetch data from disks. A memory fabric based on flash can be more than 53 times faster than one based around disks. Each transaction might save only milliseconds, but multiply that over millions of transactions a day. IMC architectures vary, but they generally include a combination of DRAM, which holds data temporarily, and arrays based on flash memory, which is almost as fast but persistent.
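The scale of that gap is easy to demonstrate. This minimal, self-contained Python sketch times a record served from a dict in RAM against the same record re-read from a file; exact ratios vary by machine and by OS caching, so treat the output as illustrative only.

```python
# Rough demonstration of the memory-vs-storage gap: fetch the same 4 KB
# record 10,000 times from a dict in RAM and from a file on disk.
import os, tempfile, time

record = os.urandom(4096)
path = os.path.join(tempfile.gettempdir(), "imc_demo.bin")
with open(path, "wb") as f:
    f.write(record)

memory_tier = {"record": record}

t0 = time.perf_counter()
for _ in range(10_000):
    _ = memory_tier["record"]          # served from DRAM
mem_s = time.perf_counter() - t0

t0 = time.perf_counter()
for _ in range(10_000):
    with open(path, "rb") as f:        # served from the storage tier
        _ = f.read()
disk_s = time.perf_counter() - t0

print(f"memory: {mem_s:.4f}s   disk: {disk_s:.4f}s   "
      f"gap: {disk_s / mem_s:.0f}x")
os.remove(path)
```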

The shift will have a cascading effect. Moving from drives to flash allows developers to cut many lines of code from applications. In turn, that means fewer product delays and maintenance headaches.

The Future of In Memory Computing

Some companies have already adopted IMC concepts. Social network Tagged.com was architected under the assumption that it will always retrieve data from the memory tier. SAP's HANA addresses only non-volatile memory, and Oracle is making a similar shift with Exadata, now combining DRAM and flash into a 'memory tier.' To SAP and Oracle, the Rubicon has been crossed: in tests, HANA has processed 1,000 times more data in half the time of conventional databases. IMC will usher in an entirely new programming model and ultimately a new business model for software companies.

With IMC-based systems, your data center would go on a massive diet. Right now, servers in the most advanced data centers are sitting around with nothing to do because of latency: even Microsoft admits servers are in use just 15 percent of the time. Think of it: 85 percent of your computing cycles go to waste because the servers are waiting for something to do. That is a massive amount of excess overhead in hardware, real estate, power consumption and productivity.

We did some calculations on what would happen if you redesigned a data center around memory-based storage systems. You could store 40 times as much data in the same finite space. It takes four racks of disk storage to build a system capable of 1 million IOPS, or input/output operations per second; a flash-based storage system needs only one shelf. Energy consumption would drop by 80 percent, since memory-based systems consume less power and require less air conditioning.
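Those claims are easy to turn into back-of-the-envelope numbers. The sketch below uses only the figures quoted in this article, plus one labeled assumption about rack density.

```python
# Back-of-the-envelope math using the figures quoted in this article.
disk_racks_per_m_iops = 4     # quoted: 4 racks of disk for 1M IOPS
flash_shelves_per_m_iops = 1  # quoted: 1 flash shelf for the same 1M IOPS
shelves_per_rack = 10         # assumption: ~10 shelves per rack

footprint_gain = disk_racks_per_m_iops * shelves_per_rack / flash_shelves_per_m_iops
print(f"floor space for 1M IOPS shrinks ~{footprint_gain:.0f}x")

density_gain = 40             # quoted: 40x more data in the same space
energy_drop = 0.80            # quoted: 80% lower energy consumption
utilization = 0.15            # quoted: servers busy 15% of the time

print(f"data per square foot: {density_gain}x")
print(f"energy bill: {(1 - energy_drop):.0%} of today's")
print(f"compute cycles currently wasted: {(1 - utilization):.0%}")
```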

The metrics around in-memory computing will continue to get better. In the future, it may be possible to produce systems with hundreds of petabytes, or systems that can hold all of the printed material ever produced times five. All of this data would be instantly available to applications allowing for faster and more accurate decision making.

A shift to In-Memory Computing will allow Big Data analytics to sing. Think again about how IMC requires software reconfiguration. Reducing excess software code will accelerate performance. Speed is absolutely crucial for predictive analytics to succeed. The Internet of Things – where inanimate objects and sensors will be collecting data about the real world all the time – will become manageable. You will know what’s going on in near real-time – rather than waiting around.

This post was originally published on Forbes.com

Here’s How Graphene Will Let Us Read DNA Directly


The wonder material graphene has recently led to another exciting scientific breakthrough, this time involving the building blocks of life. Whereas reading DNA has so far been a laborious, expensive, and time-consuming chemical process, a new breakthrough using graphene could transform the gene sequencing industry.


New research from the National Institute of Standards and Technology (NIST) has simulated how DNA sequencing could become much faster and more accurate through a nanopore sequencing process: a single DNA molecule is pulled through a tiny, chemically active hole in an ultra-thin sheet of graphene, and the resulting changes in electrical current are detected.

The simulation suggests that about 66 billion bases, the smallest units of genetic information, could be identified in just one second. Even more impressive, the study found the results to be 90% accurate with no false positives. If the simulation proves as effective in physical experiments, this could be a huge breakthrough in several fields that utilize genetic information, including forensics.

While the concept of nanopore sequencing—pulling electrically charged molecules through a pore in a thin material—has been around for at least 20 years, using graphene as that sheet solves some of the major side effects that have hampered the process. Because of graphene's unique chemical properties and its extreme thinness, four graphene nanoribbons could be bonded together to form an integrated DNA sensor. The physics at play is quite complex, but this video of the simulation breaks it down pretty clearly; if you're interested in a deeper scientific explanation, check out this article from phys.org.
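To see how "changes in electrical current" become a DNA readout, consider this toy base-caller. The current levels are invented for illustration; the real NIST simulation models far subtler physics, but the principle of snapping each measured level to its nearest reference value is the same.

```python
# Toy nanopore base-caller: each base perturbs the pore current by a
# characteristic amount, so map each measurement to the nearest reference
# level. All numbers are invented for illustration.
REFERENCE_LEVELS = {"A": 1.0, "C": 2.0, "G": 3.0, "T": 4.0}  # arbitrary units

def call_bases(measurements):
    """Map each current measurement to the base with the closest level."""
    return "".join(
        min(REFERENCE_LEVELS, key=lambda b: abs(REFERENCE_LEVELS[b] - m))
        for m in measurements
    )

signal = [1.1, 3.9, 2.2, 2.8, 1.0, 4.2]   # noisy readings from the pore
print(call_bases(signal))                  # -> "ATCGAT"
```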

The major benefit of this new approach to DNA sequencing is that it would make the process far more practical in the real world, eliminating the need for costly computers and complex lab setups. Once NIST perfects its method and proves its success in real-world conditions, we can expect to see huge strides in DNA sequencing.

Nanotech’s Quest to Clean Up the Environment

Nanoparticles are so small that they remain undetected by the human eye, but we interact with them in the products we use every day. From cosmetics to sunscreen to plastics, we've become heavily reliant on these tiny particles to strengthen household products and prolong their shelf life.

Another class of nanomaterials, including graphene, is opening revolutionary new ways to do everything from cleaning nuclear waste to building better batteries to engineering stronger smartphones. So it's no surprise that these tiny particles have embarked on a huge new quest: cleaning harmful chemicals out of the environment. Read on for two exciting scientific breakthroughs that could change the way we clean up after ourselves here on Mother Earth.

1. Trap the Chemicals

When two pharmacists turned chemical researchers set out to develop nanoparticles to carry drugs to cancer cells, they never imagined that what they would discover instead was a revolutionary way to extract toxic chemicals from the ocean.

Led by Ferdinand Brandl and Nicolas Bertrand, a research team from MIT and the Federal University of Goiás in Brazil successfully demonstrated how nanoparticles and UV light can be used to isolate and extract harmful chemicals from soil and water.

Toxic materials including pesticides often resist degradation through natural processes, meaning they linger in the environment long after they’ve served their purpose. These pollutants are harmful not only to humans and animals, but they also make it harder for Mother Nature to remain self-sustaining. What if a simple process using light and microscopic particles could effectively extract and isolate these toxic chemicals from the environment?

How Brandl and Bertrand were able to achieve this feat is scientifically complex, but the concept is beautifully simple. First they synthesized polymers made from polyethylene glycol—an FDA-approved compound you’ve likely used countless times in tubes of toothpaste or bottles of eyedrops. These polymers are biodegradable.

Because of the molecular nature of these polymers, they would normally remain suspended and evenly dispersed in a solution such as water. However, what the research team found was that by exposing the polymers to UV light, the polymers exhibited a new ability to surround and trap harmful pollutants in the water. Essentially, the polymers shed their shells and then cluster together around harmful pollutants, thereby allowing for easy extraction of the bad stuff by filtration or sedimentation.

The team demonstrated how this innovative method could extract phthalates, chemicals commonly used to strengthen plastic. As phthalates have recently come under fire for their wide-ranging, potentially harmful health effects, a method for removing them from wastewater could have huge benefits. The researchers also removed BPA from thermal printing paper samples and carcinogenic compounds from contaminated soil. Not too shabby for a microscopic particle and some light rays!

This method could prove a huge breakthrough for cleaning up the environment, as its effects are irreversible and the polymers used are biodegradable. The really exciting news here, according to the researchers, was proof positive that small molecules can in fact adsorb passively onto nanoparticle surfaces. For a more technical description of why this process is such a game changer, check out this article from MIT.

2. Shake out the Contaminants

Meanwhile, researchers in the physics department at Michigan Tech have found another way to potentially use nanomaterials to clean the ocean. Using the basic scientific principle that oil and water do not mix, a team led by research professor Dongyan Zhang demonstrated a method of shaking pollutants out of liquids that could be scaled up to clean the ocean.

Unlike polyethylene glycol polymers, many nanoparticles used in commercial products like makeup and sunscreen are not biodegradable, and their effects on the ocean are a huge problem. Zhang's team tested the shake-to-clean method on nanotubes, graphene, boron nitride nanosheets, and other microscopic substances, and found that shaking such tiny particles out of contaminated water could be far more effective than mesh or filter paper.

So far the research team has successfully extracted nanomaterials from contaminated water in tiny test tubes with just a minute of hand shaking. The next step will be to figure out how to scale up this solution so it can be a viable means of cleaning the contamination out of a source of water as big as the ocean.

Scientists on the forefront of researching nanoparticles as tiny trash compactors are taking all kinds of interesting approaches to how best to clean the environment, but they all have one thing in common: the simplest methods are often the best methods, especially when it comes to complicated problems.

How Big Data is Optimizing the Classroom

Over the past decade, data science has unlocked huge stores of information that enable companies to tailor their offerings to individual consumers. Big data has allowed companies like Amazon and Alibaba to create complex algorithms that predict consumer shopping patterns and make product suggestions with a high level of accuracy. Only recently has big data made a play for influencing education with the same level of personalization. While big data is just stepping into the classroom, we can expect to see huge transformations over the next five years in how teachers teach and how students learn.

The old teaching model is outdated for today's world. A recent Columbia University study of 6,000 middle school math students across the country found vast improvements when teacher-led instruction was coupled with personalized learning tools: 1.5 years of progress in math over the course of one school year, 47% higher than the national average. Personalization is the key to better education, where one size clearly does not fit all. Since no two students are exactly alike, the tools we use to teach them shouldn't be one-size-fits-all either.

Big data is increasingly able to provide such personalization through artificial intelligence that transforms data into adaptive, customized interfaces. Effective personalization in learning tools will come from two areas of computer learning: interfaces that learn from a user's own actions and preferences, and interfaces that learn from the overall network to make helpful inferences. Think of Netflix's recommendations based on your past viewing history, or Spotify's recommendations based on what similar users are streaming. By moving away from fixed lesson plans and rigid testing toward adaptive, technology-driven assessments, big data becomes smart data, and students become more active learners, with proven results that will drive the economy. According to estimates by McKinsey, increasing the use of student data in education could unlock between $900 billion and $1.2 trillion in global economic value.
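As a toy illustration of the second kind of learning, here is a network-level recommender that suggests a practice module based on the most similar peer, Spotify-style. Every student, module and mastery score here is invented.

```python
# Toy network-level inference: recommend the module that the most similar
# peer has mastered and this student has not. All data is invented.
import math

MODULES = ["fractions", "ratios", "algebra"]
SCORES = {                       # mastery of each module, 0.0 to 1.0
    "ana":   [0.90, 0.80, 0.10],
    "ben":   [0.85, 0.75, 0.90], # similar to ana, but strong in algebra
    "chris": [0.20, 0.30, 0.80],
}

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    norm = lambda w: math.sqrt(sum(x * x for x in w))
    return dot / (norm(u) * norm(v))

def recommend(student):
    me = SCORES[student]
    peer = max((s for s in SCORES if s != student),
               key=lambda s: cosine(me, SCORES[s]))
    gaps = [(SCORES[peer][i] - me[i], m) for i, m in enumerate(MODULES)]
    return max(gaps)[1]          # module with the biggest gap to the peer

print(recommend("ana"))          # -> "algebra"
```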

Recently, Apple and IBM turned their analytical expertise beyond the enterprise to this huge untapped sector by jointly developing a Student Achievement App, with real-world testing set to roll out in select U.S. classrooms by 2016. The app is being described as "content analytics for student learning": it will give teachers real-time analytics on each student's progress, ultimately transforming the educational experience from arbitrary to experiential.

College admissions offices are also harnessing the power of big data. Whereas applicants have traditionally been filtered mainly by standardized test scores, big data promises to point admissions officers toward the applicants most likely to stay for four years, graduate, and go on to future success. Ithaca College, for instance, has been using applicants' social media data since 2007, when it launched a Facebook-like site for potential students to connect with each other and with faculty. Through statistical analysis of this data, admissions officers could see which student behaviors led to four-year enrollment; in other words, user engagement signals how interested a potential student is in Ithaca College. Universities can use this data to achieve a high yield rate at lower cost. Essentially, big data provides admissions officers with a valuable measure of supply and demand.

From elementary classrooms to college campuses, big data has begun to reshape the way we learn in powerful ways. While it’s impossible to predict exactly what classrooms will look like in 2030, it’s clear that the next generation of students will learn smarter.

Is DNA the Perfect Place to Store Computer Data?

Nearly every aspect of our modern lives has become intertwined with computer data, so it makes sense that scientists would eventually take this coupling one step further. We are about to witness a data storage breakthrough in which digital information is embedded into the primary fabric of our being: the double helix of DNA.

While this might sound like something straight out of a sci-fi movie, recent experiments led by Microsoft and the University of Washington, and separately by the University of Illinois, have demonstrated how DNA molecules may be an ideal medium for storing digital records. The most impressive part? Researchers say all the world's data could be stored in nine liters of solution. For reference, that's a single case of wine.

While at first this may seem hard to imagine, storing data on DNA actually makes a lot of sense. After all, DNA is already an amazing data storage tool, holding all the information needed to create a healthy human being, and it's remarkably sturdy. Now that we can assemble synthetic DNA strands, it follows that we should be able to control what information gets stored on them.

DNA data storage is still in the research and development stage, but its eventual success would solve a few critical storage problems. First off, scientists believe this method could keep data safely stored for over a million years. Compared to the decades-long lifespan of today's microelectronic storage on disk, tape and optical media, this longevity would be a huge upgrade.

DNA is also a very space efficient storage method. Picture a grain of sand. A DNA molecule even smaller than that could potentially store up to an exabyte of info—or the equivalent of 200 million DVDs.

As the costs of producing synthetic DNA continue to fall, a hybrid storage solution may also be in the near future. This coupling of biotechnology and information technology would be a huge milestone in a partnership that dates back to the early 60s. After all, the first personal computer, the LINC, was developed for biomedical research purposes.

Researchers have already proven the ability to store specific data in DNA strands, and then later recall that data in digital form. To picture how this could work, imagine a file of a photo. That photo gets broken into hundreds of components that are then stored on separate DNA molecules. Researchers can encode a specific identifier that allows that picture to be put back together seamlessly when you need it again, like instantly assembling a jigsaw puzzle.
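Here is a minimal sketch of that jigsaw idea: two bits per base, with an index attached to each fragment so the file can be reassembled in order even if the strands come back shuffled. The base mapping and chunk size are arbitrary illustrative choices, not the scheme any lab actually uses.

```python
# Minimal DNA-storage sketch: 2 bits per base (A=00, C=01, G=10, T=11),
# with each fragment carrying an index so the "jigsaw puzzle" can be
# reassembled. Mapping and chunk size are arbitrary illustrative choices.
BASES = "ACGT"

def to_dna(data: bytes) -> str:
    return "".join(BASES[(byte >> shift) & 0b11]
                   for byte in data for shift in (6, 4, 2, 0))

def from_dna(strand: str) -> bytes:
    q = [BASES.index(b) for b in strand]           # one 2-bit value per base
    return bytes((q[i] << 6) | (q[i+1] << 4) | (q[i+2] << 2) | q[i+3]
                 for i in range(0, len(q), 4))

def store(data: bytes, chunk: int = 8):
    """Split the file into indexed 'strands'."""
    return [(i, to_dna(data[i:i + chunk])) for i in range(0, len(data), chunk)]

def retrieve(strands) -> bytes:
    """Reassemble the file, even if the strands arrive out of order."""
    return b"".join(from_dna(s) for _, s in sorted(strands))

photo = b"not really a photo, but any bytes work"
strands = store(photo)
assert retrieve(reversed(strands)) == photo        # round-trips intact
print(strands[0])                                  # (0, 'CGTGCGTTCTCA...')
```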

So far, the high cost of storing data in DNA is a prohibitive factor to putting this method to commercial use. But as new partnerships in biotech and computer science continue to explore this field, we’re bound to see a breakthrough within our lifetime. It’s well worth keeping an eye on, as the potential for revolutionizing how we store and retrieve information is enormous for our data driven world.


Big Data, Big Genes: Why I-SDS Will Lead the Data Storage Race


Over the last decade, big data has given rise to an unprecedented bounty of information. This data has, in turn, transformed the face of industries ranging from healthcare to consumer tech to retail. All this data is definitely a good thing—for designers, scientists, policy makers, and just about everyone else—but it’s led to a unique problem.

How can we store raw data that grows more unwieldy every day? According to a 2013 study, 90 percent of all the data in the world has been generated in the preceding two years alone.

While video services like YouTube are obvious major contributors to the data tsunami, there's another huge player emerging in the game. A recent study in PLOS Biology found that genomics—the study of gene sequencing and mapping—will be on par with YouTube levels of data by 2025. In terms of data acquisition, storage, distribution, and analysis, genomics is the next big thing in big data.

It makes sense that genomics would benefit from recent breakthroughs in data acquisition. After all, cracking the code of human DNA holds the potential to tailor medical treatment to an individual patient's genes. Genomic medicine could replace the one-size-fits-all approach healthcare has often taken in the past.

Over the last decade, the acquisition of genomic data has grown exponentially, with the total amount of human gene sequence data increasing by 50 percent every seven months. And that doesn't even account for the estimated 2.5 million plant and animal genomes expected to be sequenced by 2025.

The biggest driver of this upward trend? Our desire to live longer, healthier lives, free from disease. The Wall Street Journal recently covered the rising trend of employers offering free and subsidized genetic testing to employees. Screening for genetic markers of obesity and certain types of cancer takes standard medical benefits to a new level; in an era where self-tracking is the new norm, we are hungry for DNA data. This isn't just another wellness perk, though. It could have major cost-saving benefits for employers: obesity contributes to other costly medical conditions, so better employee health also benefits the financial health of the company.

It’s safe to say we’re going to see genomic data skyrocket in the next few years. Data storage will need to adapt to be able to house this huge amount of information so we can learn from it. The solution? Intelligent Software Designed Storage (I-SDS).

I-SDS removes the need for cumbersome proprietary hardware stacks by replacing them with storage infrastructure managed and automated by intelligent software. Essentially, we are moving away from an outdated computational model to one that mimics how our brains compute massive amounts of data every day. I-SDS will be more cost efficient and will provide better methods for accessing data, with faster response times. Intelligent software is the next frontier for storage if we want to reap the benefits of genomic big data.

The Biggest Airplane Innovator Since the Wright Brothers

Move over, aluminum—it's time for microlattice to revolutionize aeronautical engineering. Developed by Boeing, microlattice is the world's lightest metal, composed of 99.99% air held within a network of thin, hollow struts. The 3D open-cellular polymer structure makes the material incredibly lightweight. So light, in fact, that it can balance atop a dandelion!


At the same time, microlattice is impressively strong, thanks to a structure that mimics human bone: a rigid outside coupled with a hollow, open-cellular interior. It's also less brittle than bone, designed with a compressible grid that lets it absorb a large amount of energy and spring back into shape, much like a sponge. What's more, microlattice floats to the ground like a feather when dropped. Something with such an elegant design, mirroring the natural world, has the potential to radically alter the way we construct aircraft, cars, and more.


Boeing makes this breakthrough easy to understand with a familiar scenario from high school science class: the egg drop challenge. The usual method for dropping an egg from multiple stories involves padding it in bubble wrap and hoping for the best. With microlattice, Boeing has essentially created a structure that could closely surround the egg and absorb the full force of impact without a lot of bulk. So your eggs won't get scrambled.


In real-world applications, we can expect to see microlattice replacing traditional materials in airplanes and rockets. Swapping even a small percentage of the aluminum commonly used in aircraft for microlattice could significantly reduce overall weight. A lighter plane requires less fuel, and with fuel representing the lion's share of airline operating costs, the savings could trickle down to consumers as lower ticket prices. Most importantly, microlattice's impressive strength and flexibility upon impact mean that lightening the load would not hamper performance; it could actually enhance the durability and safety of aircraft.


While microlattice was first invented by scientists at UC Irvine, HRL Laboratories and Caltech back in 2011, it's only now finding viable applications through Boeing's further development. This isn't the first time Boeing has revolutionized aircraft engineering. With the 787 Dreamliner, which made its debut in 2008, Boeing introduced the first plane whose fuselage was made of one-piece composite barrel sections instead of aluminum panels. Combined with new carbon fiber materials, the 787 became the most fuel-efficient plane in its class.


Imagine how Boeing’s ongoing innovations, coupled with microlattice, will change the aerospace game even more. With panels or sidewalls made of microlattice, commercial jets would be lighter, stronger, and more fuel efficient. It’s only a matter of time until we see this amazing new wonder material taking to the skies, and it’s likely that other earthbound applications will be discovered as well. For microlattice, the sky’s the limit.

10 Ways Graphene Will Change the World

Graphene is an amazingly strong, thin, and versatile "wonder material" that has led to over 25,000 patents since it was first isolated in 2004. Scientists praise it as a single layer of graphite atoms with remarkable strength and conductivity, and investors are just as impressed with its potentially limitless applications. Think of all the ways plastic changed the world after its invention in the early twentieth century; now it's graphene's time to shine. Here are 10 major ways graphene will change the world as we know it.

1. Batteries

Combining two layers of graphene with one layer of electrolyte could be the key to getting us into battery-free electric cars within the next five years. By replacing the cumbersome and costly car battery with a graphene-powered supercapacitor, scientists may have hit on the answer to the stunted growth of electric cars. Supercapacitors could enable faster vehicle acceleration and speedy charging, and because they're also smaller, lighter, and stronger than today's electric batteries, it's clear that graphene will reshape the auto industry in the coming years.

2. Healthcare

Graphene-based materials have been favorably received in the biomedical field. Ongoing research into applying graphene's unique physicochemical properties to healthcare is positioning the nanomaterial to improve treatments in a variety of ways, from stimulating nerve regeneration to treating cancer via photo-thermal therapy. Graphene could change the way we heal.

3. Lighting

Earlier this year, scientists combined an atomically thin graphene filament with a computer chip to create the world's thinnest light bulb. This is a huge feat: light bulbs could never be paired with computer chips before, because the high heat needed to produce light damaged the chips. Graphene's unique property of becoming a poorer conductor at high temperatures allows it to emit light without damaging the attached chip. This will be a game changer not only in home lighting but also in smartphones and computers, where graphene could provide a faster, cheaper, more energy-efficient and compact method of processing information. Let there be light!

4. Green Energy

Graphene allows positively charged hydrogen atoms, or protons, to pass through it despite being completely impermeable to all other gases, including hydrogen itself. This could make fuel cells dramatically more efficient. It could also allow hydrogen fuel to be extracted from the air and burned as a carbon-free energy source, yielding only water and electricity and, incredibly, no damaging waste products.

5. Sports Equipment

From super-strong tennis racquets to racing skis, graphene has limitless potential to improve the strength and flexibility of sports equipment. It has already been used in cycling helmets that are extremely strong yet lightweight. By using graphene as a composite material to strengthen traditional sports equipment, manufacturers are bringing new hybrids to market that give athletes a competitive advantage.

6. Bionic Materials

While this may sound like a plot from a Spider-Man movie, researchers have successfully sprayed graphene onto spiders, which then spun webs incorporating the nanomaterial. The result? Silk 3.5 times stronger than the spiders' natural silk, which is already among the strongest natural materials in the world. This discovery could lead to incredibly strong bionic materials that revolutionize building and construction methods.

7. Tech Displays

Most of today's tablet and smartphone displays rely on indium tin oxide, which is expensive and inflexible. Graphene is set to replace it as a thin, flexible display material for screens. This could also be a huge breakthrough for wearable tech, where flexibility is even more important.

8. Manufacturing Electronics

The recent application of graphene-based inks will fuel breakthroughs in the high-speed manufacturing of printed electronics. Graphene's optical transparency and electrical conductivity make it much more appealing than traditional ink components, and thanks to its flexibility, future electronics might be printed in unexpected shapes.

9. Cooling Electronics

White graphene—hexagonal boron nitride arranged in a 3D lattice—could hold the key to keeping electronics from overheating. By providing better heat distribution and flow than the materials currently used in smartphones and tablets, white graphene will keep the future cool.

10. Better Body Armor

By now you know how thin, strong, and flexible graphene is. What's more, graphene is also great at absorbing sudden impact: researchers have found it to be 10 times better than steel at dissipating kinetic energy, like the energy delivered when a bullet strikes body armor. Graphene's unprecedented ability to spread an impact over a large area could revolutionize soldiers' armor, and researchers have also proposed using it as a coating on spacecraft to mitigate damage from orbital debris. That's one tough nanomaterial!


This article was originally published on graphene-investors.com


How Mobile Wallet Apps are Reshaping the Ways We Pay

The recent explosion of mobile payment apps could signal the end of traditional wallets stuffed with credit and debit cards. By leveraging social and mobile capabilities, as well as utilizing cloud computing and SaaS models, these mobile wallets have a leg up on traditional banks, which are often slower to innovate because of stricter regulations. A recent report by Accenture predicts that unless traditional U.S. banks learn to emulate these tech disruptors, they stand to lose as much as 35% of their market share by 2020.


A recent Nielsen study on mobile payments found that 40% of mobile wallet users report it as their primary method of settling the bill. Demographically, users age 18-34 account for 55% of active mobile payments, and mobile payments appeal across gender and income levels, too. As more mobile payment methods move from QR codes to NFC, the convenience and ease of paying via mobile wallet apps could make them the new norm.

If you're skeptical of paying for everything with your phone, rest assured that mobile payments are actually more secure than swiping a credit or debit card, because they never transmit your card number. Instead, they use a randomly generated number called a token, which changes with every transaction, making fraud much less likely. In the future we can expect to see a huge rise in mobile biometrics as a way to further strengthen payment authentication.
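To make that concrete, here is a minimal Python sketch of single-use tokenization. It is a simplification: every class and method name is invented, and real schemes (such as EMVCo network tokenization) add cryptograms, device binding and certified hardware.

```python
# Simplified illustration of payment tokenization: the merchant only ever
# sees a random, single-use token; only the network can map it back to the
# real card. Real schemes add cryptograms, expiry and device binding.
import secrets

class PaymentNetwork:
    def __init__(self):
        self._vault = {}                    # token -> real card number

    def issue_token(self, card_number: str) -> str:
        token = secrets.token_hex(8)        # fresh random token per payment
        self._vault[token] = card_number
        return token

    def settle(self, token: str, amount: float) -> bool:
        card = self._vault.pop(token, None) # pop: a token cannot be replayed
        if card is None:
            return False                    # unknown or already-used token
        print(f"charged ${amount:.2f} to card ending {card[-4:]}")
        return True

network = PaymentNetwork()
tok = network.issue_token("4111111111111111")
print(network.settle(tok, 4.50))   # True: the charge goes through
print(network.settle(tok, 4.50))   # False: a replayed token is useless
```

In the meantime, here are four mobile wallet disruptors to keep an eye on as we head toward 2016: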

1. Apple Pay



Now available in the U.S. and the U.K., Apple Pay allows iPhone 6 and Apple Watch users to make retail payments via Near Field Communication (NFC). With international rollout plans in the works, Apple Pay is already accepted at over 700,000 locations, including major retailers like Whole Foods, Staples, Nike, Subway, Macy's, and of course, Apple. You can even pay for entry into U.S. national parks with Apple Pay. Apple already has deals with the major credit card providers, with Discover the most recent to join. Retail rewards cards are also in the works, which will make it simple to apply rewards automatically at checkout, an incentive that will play a big part in the rising popularity of mobile wallets.


2. Android Pay


Android Pay is also NFC-based, letting you quickly pay with your default card at NFC-enabled checkouts. It's currently not linked with any apps, as Apple Pay is, but Google says app integration is in the works. A plus for Android Pay is that it runs on lots of Android phones, unlike Apple Pay, which requires an iPhone 6 or later.


3. Samsung Pay


Samsung just launched its competitor to Apple Pay and Android Pay, and it tops them both in one major way: Galaxy users can pay in more stores than with any other mobile payment service. Utilizing both NFC and MST (magnetic secure transmission), Samsung Pay works at NFC-enabled checkouts and also at regular card readers through the MST feature. It's also compatible with EMV readers, so the recent shift to EMV in the United States will pose no hassle for Samsung users. This one's the clear winner in terms of being accepted at the most locations.


4. Paypal Here


Eager to stay relevant in a sea of rapid payment innovations, PayPal just launched its latest device in the U.S. The PayPal Here Chip Card Reader enables retailers to process Apple, Android, and Samsung Pay. Because the U.S. recently upgraded to EMV—smart cards that store data on embedded chips instead of magnetic stripes, a standard that has been common in other countries for years—PayPal's device comes at the perfect time. To comply with the new liability rules that took effect October 1st, many retailers will have to upgrade their systems to process these payments. Now you can tap, insert, or swipe pretty much any form of payment with this handheld device. The reader is going for $149, with an incentive program that lets small business owners earn cash back for making $3,000 in sales on the device within the first three months.

The Bottom Line

These new mobile wallet options aim to make purchases easy and painless for consumers. Retailers who don't keep pace with the changes will lose business as the financial future becomes increasingly mobile.


Pros & Cons of Apple’s New iPhone Leasing Program

Apple's latest and greatest iteration of the iPhone launches today with the iPhone 6s and 6s Plus. With upgrades to the front- and rear-facing cameras, as well as a new Rose Gold finish, the lines are already forming in cities around the world.

Beyond the hardware, Apple unveiled an interesting sales technique with the launch of its iPhone Upgrade Program. Apple is calling it a financing program, but it's essentially a lease. By enticing consumers with yearly phone upgrades, AppleCare included, and the option to choose their phone plan and provider, Apple is borrowing from luxury car makers like Mercedes and BMW.


By enticing consumers to trade up to the latest model long before their old model is remotely obsolete, these luxury companies are betting on the appeal of keeping up with the Joneses, and it's working. From a sales standpoint, Apple's Upgrade Program cashes in on the larger tech trend of planned obsolescence, in which the latest model makes the previous iteration far less desirable.

Apple's Upgrade Program allows users to buy a new iPhone for a low monthly payment over a two-year period, or to trade up to the latest iPhone model every 12 months, by adding a monthly payment of $32 (for the 16GB 6s; rates go up from there) to their monthly phone service bill.

$32 a month doesn't sound like much, right? Compared with paying $700 up front for a brand new iPhone, it seems like a steal. Let's take a look at the pros and cons of Apple's new financing program from a consumer standpoint:

Pros

If you're someone who utilizes the full capabilities of the iPhone for work or pleasure, it likely makes sense to upgrade consistently to the latest model, and spreading the cost out over a number of months can lessen the financial hit.

The program includes AppleCare, which covers hardware repairs, software support and, maybe most importantly, two incidents of accidental damage. So the shattered screen from accidentally dropping your phone is no longer an issue on the Upgrade Program... as long as you can avoid being clumsy more than twice a year.

Cons

You’re signing on to higher monthly phone payments indefinitely for reasons that some might call vain and superficial. Is Apple just enabling you to lease a lifestyle you otherwise couldn’t afford?

Putting that cynical argument aside, the bigger con is that leasing costs more. Waiting a few months after a product's launch allows consumers to buy it at a lower price, or to pick up a refurbished previous-generation model for significantly less.

In order to receive a new iPhone every 12 months under the upgrade plan, you have to trade in your current phone, meaning you can’t plan on reselling it, even if it’s in good condition.

With the required two-year commitment, Apple's upgrade option gets you the 16GB 6s for $32.45 a month. The phone costs $649 to buy new at retail, plus an additional $129 for AppleCare, so the leasing price of $778.80 over the two-year commitment comes out to only 80 cents more.
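That arithmetic is easy to verify; here is a quick sketch using only the figures quoted in this article.

```python
# Checking the article's arithmetic on the 16GB iPhone 6s.
monthly, months = 32.45, 24    # Upgrade Program payment over the 2-year term
retail, applecare = 649, 129   # buying outright, plus AppleCare separately

lease_total = monthly * months # 778.80
buy_total = retail + applecare # 778.00
print(f"lease: ${lease_total:.2f}  buy: ${buy_total:.2f}  "
      f"difference: ${lease_total - buy_total:.2f}")   # $0.80
```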

While this price difference is negligible, consumers should note that buying into the Upgrade Program requires signing a loan contract with Citizens Bank, and you'll need a strong credit score to be approved. Upgrading to a new phone every 12 months also restarts your two-year contract, locking you into a permanent rental state.

That's on top of your cell phone bill, which averages over $100 a month on major U.S. plans.

Why not just take advantage of the upgrade that comes with most common cell phone plans every two years? Since the phone's cost is often built into a carrier's service plan and bundled with a lower down payment, buying into the Apple Upgrade Program could mean you're essentially paying twice for upgrades and service that should be built into a decent carrier plan.

The bottom line

Whether or not Apple’s new financing plan makes sense for you comes down to personal preference, much like the choice between buying and leasing a car. Since millennials are accustomed to paying monthly fees—renting where their parents’ generation owned—Apple is sure to find plenty of iPhone users who will buy into constant upgrades and the illusion of lower costs.

You just have to decide which is worth more: value or style.

Photo credits: Flickr/Irudayam; Apple

*This article originally appeared on The Next Web. Check out my author page here.

Why Google Glass Broke – And How It’s Fixing Your Doctor’s Office

The journey of Google Glass can teach any entrepreneur valuable lessons about brand strategy. From highly anticipated technology breakthrough, to famous retail flop criticized for its appearance, to its recent and more promising reincarnation as a business tool, Glass has seen a lot of action in its two-year life span. In a market exploding with wearable technology, how did a hands-free computer from one of the biggest tech companies on the planet flop so spectacularly?


From a marketing perspective, Google made some interesting choices in launching this amazing device, and they may have worked against it:

No official product launch: Glass wanted to be seen as hip from the get-go, so prototypes were given to early adopters and celebrities in the hopes that the mystique would drive consumers to happily shell out $1,500 for the hot new tech-cessory. The mystique may have been real, but consumers were never told an actual product release date, or where they could purchase the device. Google should have taken a page from Apple's playbook and built buzz around a well-publicized release date.

No clear brand messaging: The amazing potential of Google Glass got muddled somewhere between celebrities wearing it and consumers not knowing exactly why they needed it. The device looked sci-fi at best (geeky at worst), and Glass' myriad features were lost amid criticism of the frames' appearance. Essentially, the product's capabilities were drowned out by the noise. Google could have marketed those features far more effectively through a clear advertising campaign.

Google Glass is now under the leadership of Tony Fadell, whose track record as Nest CEO and as an Apple product designer is a good omen for Glass’ reincarnation. Fadell’s team took Glass’ initial failure as an opportunity to pivot the product away from the consumer market and toward industries where the complex technology would be more relevant, including the doctor’s office. This summer Google quietly relaunched Glass, not as a trendy wearable device, but as a business tool equipped to save lives in the emergency room.

So what can entrepreneurs learn from Google Glass’ about-face?

Turn setbacks into opportunities: By repositioning Glass as a tool used exclusively in business settings, Google has found a way around the initial privacy issue: consumers were not happy with Glass’ ability to discreetly record video in public places. The new iteration of Glass will instead be used for internal video transmission in business settings. Picture a doctor live-streaming a surgery to colleagues and medical students, or a technical engineer in the field receiving live feedback from colleagues in the office. In these cases, live-stream video will be an invaluable tool.
Learn from criticism: Fadell has been tasked with making Glass more user-friendly and attractive. Reported updates include making the device waterproof and foldable and equipping it with a better battery. If a consumer version relaunches in the future, Glass will likely take its many aesthetic criticisms into account, too.
Target the right audience: While it didn’t work for a consumer market, Glass has found a new home with enormous potential in the medical, manufacturing, and energy fields. According to research firm Gartner, the market for head-mounted displays is expected to reach a cumulative 25 million units by 2018. The lesson here is that sometimes what begins as a B2C product evolves into B2B applications.  

From its not-so-humble beginnings as a celebrity accessory to its quieter success as a lifesaving tool in the ER, Google Glass has had an interesting journey so far, with more pivots likely to come as the product continues to evolve.

Photo credits: Flickr/Erica Joy; Pixabay

Warren Buffett’s $32B Bet on the Aerospace Industry

According to the industry group Airlines for America, 14.2 million people are expected to travel during the 2015 Labor Day holiday weekend. With that number steadily on the rise, air travel is booming, and it has just piqued business titan Warren Buffett’s interest.

Buffett’s illustrious holding company, Berkshire Hathaway Inc., recently acquired Precision Castparts in an estimated $32 billion deal, said to be the company’s largest acquisition to date. Berkshire Hathaway reportedly paid $235 per share in cash for the company, which makes metal components for the aerospace industry. The deal is expected to close in the first quarter of 2016.

Justifying his interest in Precision Castparts to the New York Times, Buffett said, “It is the supplier of choice for the world’s aerospace industry, one of the largest sources of American exports.” With the improved economy and the steady increase in air travel, Buffett’s interest makes sense: as long as air travel keeps rising, the airline industry will need more planes, and those planes will inevitably need more parts.

The deal inches Berkshire Hathaway further into the industrial sector, following acquisitions such as Marmon, an industrial manufacturer, and the chemical maker Lubrizol. Berkshire Hathaway, which reportedly holds an estimated $62.6 billion in cash, has a diverse portfolio of holdings that includes Heinz in the food sector, Burlington Northern Santa Fe in railroads, General Re in insurance and Fruit of the Loom in retail, among others.

Buffett, often referred to as the Oracle of Omaha, isn’t exactly known as a trend or momentum investor. Instead, he focuses on companies with longevity that sit at the forefront of their industries and generate large amounts of revenue. Buffett isn’t the type to buy and sell often; he has held some stocks for over 50 years.

Buffett reportedly made the offer to Precision Castparts Chairman and Chief Executive Mark Donegan at the annual Allen & Company conference, having become aware of the company through investment manager Todd Combs’ stake in it.

The Portland, Oregon-based Precision Castparts was established in 1949 and makes turbine airfoils, valves, fasteners and other products used in the defense, gas, energy and aerospace industries. The company reportedly generates annual revenue of $10 billion, and its parts are used by aerospace giants such as Boeing and Airbus. The question to ponder: should we all shoot for the stars as Buffett has and invest in the aerospace industry? Buffett has profited from non-traditional moves before; in the wake of the 2008 financial crisis, he made major investments in both Bank of America and Goldman Sachs. It will be interesting to see how Berkshire Hathaway’s largest acquisition yet compares with the rest of its diverse portfolio.

Graphene is White Hot in the Next Dimension

The wonder material graphene has recently tackled a new dimension and found an exciting application for the future of technology. If your phone has ever overheated on a hot day, you’re going to want to read this.

Hexagonal boron nitride (h-BN), a material similar to graphene and often called white graphene, is an electrical insulator. Normally a 2-D material, white graphene shows serious heat-withstanding capabilities in a newly proposed, complex 3-D lattice structure. In most materials used to build electronic devices, heat moves along a plane rather than between layers, so it cannot dissipate evenly, which frequently results in overheating. This is also the case with 2-D hexagonal boron nitride, but not when the same material is simulated in a 3-D structure.

Rouzbeh Shahsavari and Navid Sakhavand, research scientists at Rice University, have just completed a theoretical analysis of a 3-D, lattice-like white graphene structure. It combines hexagonal boron nitride sheets with boron nitride nanotubes to create a configuration in which phonons (the quantized vibrations that carry heat) move in multiple directions: not only along planes, but across and through them as well. This means that electrical engineers now have the opportunity to move heat through and away from key components in electronic devices, which opens the door to significant cooling opportunities for many of the electronic items we use daily, from cell phones to massive data server storage facilities.
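
To make the in-plane versus cross-plane distinction concrete, here is a toy heat-diffusion sketch in Python. It is not the Rice group’s model, and the conductivity values are invented for illustration; it only shows how raising cross-plane conduction (what the 3-D lattice provides) lets a hot spot drain out of its layer:

```python
import numpy as np

def diffuse(k_inplane, k_crossplane, steps=500):
    """Toy finite-difference heat spread on a stack of layers.
    Axis 1 = in-plane direction, axis 0 = across layers.
    Conductivities are illustrative, not measured h-BN values."""
    T = np.zeros((21, 21))
    T[10, 10] = 1000.0  # hot spot in the middle layer
    for _ in range(steps):
        d_in = np.roll(T, 1, axis=1) + np.roll(T, -1, axis=1) - 2 * T
        d_cross = np.roll(T, 1, axis=0) + np.roll(T, -1, axis=0) - 2 * T
        T = T + 0.1 * (k_inplane * d_in + k_crossplane * d_cross)
    return T

planar = diffuse(k_inplane=1.0, k_crossplane=0.01)  # 2-D-like material
lattice = diffuse(k_inplane=1.0, k_crossplane=0.5)  # 3-D lattice-like material

for name, T in (("planar", planar), ("lattice", lattice)):
    share = 100 * T[10].sum() / T.sum()
    print(f"{name}: {share:.0f}% of the heat is still in the hot layer")
```

With weak cross-plane conduction, most of the heat stays in or near its original layer; with the stronger lattice-like coupling it spreads through the stack, which is exactly the cooling behavior the researchers describe.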

In an interview with Fortune, Shahsavari explained the process further in terms of 3-D thermal-management systems. Essentially, the shape of the material, and how its mass is distributed from one point to another, can actually steer the direction of heat flow. Even when heat is inclined to move in one direction, the structure acts as a switch that can reverse it, distributing heat more evenly through the object. The boron nitride nanotubes are what enable this transfer between layers to occur.

For most of us, this just means that in the near future we may be able to worry less about our smartphones and tablets overheating. For engineers it may mean an entirely new approach to cooling through the use of white graphene, which could potentially provide a better or alternative solution to cooling mechanisms like nanofluids. Those interested in an incredibly complex scientific explanation can read more about how this dimensional crossover works here or here.

photo credit: Shahsavari Group/Rice University

Tesla Trailblazes New Frontiers in Solar Power

In 1953, Charles Wilson, former president of General Motors, remarked, “as General Motors goes, so goes the nation.” This wasn’t arrogance talking, but fact: GM was the largest corporation in America, employing over 850,000 workers worldwide and capturing 54% of the U.S. auto market. The Detroit-based behemoth was on the cutting edge of several automotive innovations, including an early mass-produced V8 engine and early applications of air conditioning in cars. A year after Wilson made that remark, General Motors produced its 50 millionth car. Fast forward to GM’s faulty ignition switch scandal and its bankruptcy fall from grace, and we’ve seen a decoupling of General Motors and the nation at large. On the upside, this shift has allowed other innovators to enter the playing field. Chief among them is Tesla Motors, led by entrepreneur extraordinaire Elon Musk.

Tesla Motors doesn’t operate on the same scale that GM once did, but the company has leveraged its electric vehicle initiatives to spur change across the industry. For instance, in 2014 Musk announced that Tesla’s technology patents could be used by anyone in good faith to speed the development of electric cars. Tesla’s latest foray into selling solar-powered batteries indicates a new alliance forming between our country’s future and an automaker, one that wants the public to think of it not just as a car company, but as an “energy innovation company.” The Tesla Powerwall, which launched this spring and is already sold out through 2016, is a rechargeable lithium-ion battery designed to store energy at the residential level for load shifting, backup power and self-consumption of solar power generation.
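
To see why load shifting matters in dollar terms, here is a rough sketch. The 7 kWh capacity matches the daily-cycle Powerwall as reported at launch, but the electricity rates and efficiency figure are assumptions for illustration, not Tesla’s numbers:

```python
# Back-of-the-envelope load shifting: charge the battery off-peak (or from
# solar), then discharge it during expensive peak hours.
BATTERY_KWH = 7.0     # daily-cycle Powerwall capacity as reported at launch
PEAK_RATE = 0.30      # assumed peak price, USD per kWh (varies by utility)
OFFPEAK_RATE = 0.10   # assumed off-peak price, USD per kWh
EFFICIENCY = 0.92     # assumed round-trip efficiency

charged = BATTERY_KWH                 # energy bought off-peak, kWh
delivered = BATTERY_KWH * EFFICIENCY  # energy returned at peak after losses
daily_saving = delivered * PEAK_RATE - charged * OFFPEAK_RATE

print(f"Savings per day:  ${daily_saving:.2f}")        # ~$1.23
print(f"Savings per year: ${daily_saving * 365:.0f}")  # ~$450
```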

Tesla’s latest product development signals a growing focus on renewable energy, centered on resurgent solar power. Given Elon Musk’s diverse entrepreneurial background—not to mention the fact that he’s also the chairman of SolarCity, America’s second largest solar provider—Tesla’s move from the auto industry into the energy sector makes sense. The hope for the Powerwall battery is that it helps us move off the grid with clean energy, using the sun’s power even when it isn’t shining.

Analysts at GSV Capital predict that Tesla’s move into the solar battery industry will be a watershed moment because it captures these five key trends driving global renewable energy:

1. Abundance: Solar energy is starting to look like a cheaper, more viable alternative to fossil fuels.

2. Storage: Batteries continue to get cheaper and better, proving the biggest criticism of solar power—that it’s unreliable—wrong.

3. Distribution: The Powerwall allows consumers not just to buy and use batteries, but to produce and store energy for future use.

4. Intelligence: Energy tech is starting to get the same treatment as every other digitized, highly intelligent aspect of our lives. Algorithms are starting to create an “energy internet.”

5. Financing: New financing sources are emerging to promote clean tech with incentives for consumers and businesses adopting greener consumption habits. Fast Company has covered the Powerwall by the numbers extensively.

Solar energy currently accounts for only half a percent of the world’s total energy consumption, but the innovations signaled by Tesla, along with the five trends solar energy companies are starting to tap into, are exciting indicators that the future of renewable energy will be shining brightly, 24 hours a day.

Images via Tesla

Infographic via Strom-Report

Startup Bracket Raises $85 Million to Rewrite the Cloud

It’s an exciting time for the cloud computing industry. Nasdaq reported that cloud services grew by 60% last year, and according to experts, the next five years will continue to see exponential growth. But this monumental growth and market transformation does not come without risks. The increasing reliance on the cloud for storage and computing power means sending sensitive data between data centers, which exposes it to more potential points of infiltration. And due to the overlapping nature of many cloud services, once hackers get inside of a network, their reach can be vast. So before the champagne is popped, vulnerabilities must be addressed.

New security horror stories emerge all the time now. An international hacking ring infiltrated 100 banks in 30 countries and stole $1 billion. Hackers gained data on 70 million people when Anthem, a prominent health insurer, was breached. Home Depot was recently targeted, with hackers taking credit card information for more than 50 million people. Once a hack like this happens, the damage can be devastating: not only does the company’s reputation suffer, but a breach can stop new digital initiatives in their tracks. Customers, meanwhile, face the tedious task of calling their banks and re-issuing their credit cards to prevent fraudulent purchases. The road to recovery can be very long.

It’s not only companies; governments are also at risk. China is accused of being behind a recent hack on the United States federal government that exposed information on 18 million federal employees. Even America’s favorite pastime isn’t safe, with baseball teams getting hacked these days. The lesson here is that every convenience comes with a trade-off. Access to powerful systems running on nearly perfectly reliable servers has eliminated the problems of localization, downloading programs, and losing data when a computer crashes for good. At the same time, these massive databases present an attractive target for hackers and criminals who understand that gaining access to even part of a database means an ocean of valuable information.

Most companies of Home Depot’s scale use highly protected enterprise data services from providers like Cisco or Oracle, which are leaders in cloud services and have (generally speaking) very secure offerings. These services are also very expensive, and large companies feel compelled to keep using them because public cloud offerings are not viewed as secure enough. One Silicon Valley company, Bracket Computing, says it has found a way to make public cloud services secure enough to handle sensitive corporate data.

In a nutshell, Bracket wraps a company’s corporate applications in encryption without making them harder to use. The encryption happens before data is sent to the remote servers, and the customer alone holds the encryption keys, which limits exposure to the point where very sensitive customer information can be transferred and handled with a higher degree of confidence. Investors are already confident too: Bracket recently raised $85 million in funding from backers like Qualcomm and GE to roll out its hyperscale cloud security solution.
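
The core idea, encrypting data under keys only the customer holds before anything leaves for the cloud, can be sketched in a few lines. This is a generic illustration using the Python cryptography library, not Bracket’s actual product (which wraps entire running applications, not just stored blobs), and upload_to_cloud is a hypothetical stand-in:

```python
from cryptography.fernet import Fernet

def upload_to_cloud(blob: bytes) -> None:
    """Hypothetical stand-in for a real object-storage upload call."""
    print(f"uploading {len(blob)} encrypted bytes...")

# The customer generates and holds the key; the cloud provider never sees it.
key = Fernet.generate_key()
cipher = Fernet(key)

record = b"customer card: 4111-1111-1111-1111"
ciphertext = cipher.encrypt(record)  # encryption happens BEFORE the data leaves
upload_to_cloud(ciphertext)

# The stored bytes are useless to anyone without the customer-held key.
assert cipher.decrypt(ciphertext) == record
```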

No computer system will ever be 100% secure, but by bringing enterprise-level security to public cloud services, at least more companies should be able to confidently harness the advantages of the cloud while losing as little sleep as possible.

Images via Zoeey; Perspecsys

Google Street View Scales New Heights with El Capitan

Only a small handful of elite climbers in the world have ever scaled the 3,000-foot vertical face of El Capitan, the most prestigious rock climbing destination in the United States. Call it a rite of passage in a sport where finding footholds in a smooth granite face is the only way to reach the top. It’s a sport reliant on carabiners and ropes—not exactly high-tech stuff. That’s why Google’s recent launch of its Street View technology on the face of El Capitan is so unexpected, giving non-climbers a chance to virtually scale the beautiful granite monolith in Yosemite Valley.

So how did Google recreate its Street View feature on a 3,000-foot rock face? It’s not as if a car with a camera mounted on top could drive up El Capitan’s nearly vertical ascent. Instead, the Google Street View team enlisted three expert climbers, Tommy Caldwell, Lynn Hill, and Alex Honnold, who worked together to mount a tripod camera, using ropes, pulleys and anchors, onto the rock face at 23 different points along their climb. To put El Capitan’s scale in perspective, picture well over two Empire State Buildings stacked end to end.

No stranger to “El Cap,” as the rock face is affectionately known among climbers, Tommy Caldwell made history this past January as one of the two climbers to complete the first successful free climb of El Capitan’s legendary Dawn Wall route. Ascending 3,000 feet over 19 days, Caldwell and his partner Kevin Jorgeson did not use ropes to help pull themselves up, but only to catch them if they fell. Of attempting a feat most called impossible, Caldwell said, “I love to dream big, and I love to find ways to be a bit of an explorer. These days it seems like everything is padded and comes with warning labels. This just lights a fire under me, and that’s a really exciting way to live.” Now anyone with an internet connection can get in on the excitement.

Getting back to the Google Street View climb: the camera, mounted by the three climbers, took multiple shots at each of the 23 points, and the photos were seamlessly stitched together into a 360-degree, high-definition panorama. We’ve come to know and love this photo-stitching technology for checking out unfamiliar neighborhoods and street views on Google Maps, but instead of cars, pedestrians and corner delis, the results depict Yosemite Valley at its most stunning: sweeping views of the glacial valley from a vantage point that few have ever experienced. For a particularly challenging stretch of the climb called the Nose route, one of the climbers carried the camera equipment on his back; his pack featured a custom rig with six small cameras angled in different directions, which fired automatically every few seconds.
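
That kind of photo stitching is now a standard computer-vision task. As a rough illustration (ordinary OpenCV, not Google’s in-house pipeline; the filenames are placeholders), a handful of overlapping shots can be merged into a panorama in a few lines:

```python
import cv2

# Overlapping shots taken from one spot, rotating between frames (placeholders).
frames = [cv2.imread(name) for name in ("shot1.jpg", "shot2.jpg", "shot3.jpg")]

# OpenCV's high-level stitcher finds matching features across frames,
# warps the images onto a common projection, and blends the seams.
stitcher = cv2.Stitcher_create(cv2.Stitcher_PANORAMA)
status, panorama = stitcher.stitch(frames)

if status == cv2.Stitcher_OK:
    cv2.imwrite("panorama.jpg", panorama)
else:
    print(f"Stitching failed (too little overlap between frames?): {status}")
```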

For lovers of the preserved wilderness of Yosemite Valley, this is the biggest homage to its natural beauty since Ansel Adams’ famous series of black-and-white photographs, which he began shooting in the 1920s and continued to capture for several decades, among them his photograph of El Capitan at sunrise, shot in 1952.

While this is the first time Google has applied its Street View technology to a climbing route, the company has been actively moving in the direction of mapping the world, off-road style. It’s off to an auspicious start that would make Ansel Adams proud.

Top Image: Flickr/Peter Liu Photography

Graphene Helps Create World’s Thinnest Lightbulb

A group of scientists from the U.S. and South Korea recently published a demonstration of on-chip visible light emission using graphene, a first of its kind. The group, led by postdoctoral research scientist Young Duck Kim, attached small strips of graphene to metal electrodes, suspended the strips above the substrate, and passed current through them, heating the graphene until it glowed like a filament. Kim’s team includes James Hone’s group at Columbia University, along with researchers from Seoul National University and the Korea Research Institute of Standards and Science.

The full findings can be found in the group’s report, Bright Visible Light Emission from Graphene.

James Hone went on to tell Phys.org that the new findings could pave the way for “atomically thin, flexible, and transparent displays, and graphene-based on-chip optical communications.” Hone attributed that potential to what the team considers a “broadband light emitter.”

Pardon the pun, but the future looks bright in this sector, as the work helps bridge the gap toward on-chip light circuits that carry information with photons the way semiconductors carry it with electric current. With graphene taking over the role of the filament, the team was able to put incandescent light onto a chip. That has been impossible until now because a filament must reach temperatures above 2,500 degrees Celsius to visibly glow, and micro-scale metal wires cannot withstand such heat without damaging the chip around them. With graphene in that role, the temperature issue is resolved and the likelihood of damaging the chip is greatly reduced.
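
That 2,500-degree threshold is standard blackbody physics rather than anything specific to the paper: a filament only looks bright once enough of its thermal emission spills into visible wavelengths. Wien’s displacement law makes the point quickly:

```python
# Wien's displacement law: the peak emission wavelength of a hot body.
WIEN_B = 2.898e-3  # Wien's constant, metre-kelvins

for celsius in (1000, 2500, 3000):
    kelvin = celsius + 273.15
    peak_nm = WIEN_B / kelvin * 1e9
    print(f"{celsius:>4} C -> peak emission near {peak_nm:,.0f} nm")

# At ~2,500 C the peak sits near 1,050 nm (infrared), but the short-wavelength
# tail of the spectrum now reaches the visible band (~400-700 nm), so the
# filament glows visibly.
```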

The group’s work continues as they try to advance the field further. At this time, their efforts are focused on characterizing device performance to determine ideal integration techniques. Hone further added, “We are just starting to dream about other uses for these structures—for example, as micro-hotplates that can be heated to thousands of degrees in a fraction of a second to study high-temperature chemical reactions or catalysis.”

2015 is shaping up to be a year of major advancements in graphene research. What remains to be seen is whether graphene becomes the material that revolutionizes several facets of innovation; it is certainly shaping up that way. However, other “super materials” could eventually become the preferred choice in nanoscale electronics: transition-metal dichalcogenides (TMDCs) hold a significant advantage over graphene in that many of them are natural semiconductors. Another likely outcome is that several 2D materials will serve side by side as primary materials to work with, each valued for its own properties.

Regardless, graphene holds supremacy at the moment, so we should expect to see more innovation in the weeks and months ahead. With this latest finding, graphene further entrenches itself in the fabric of modern innovation, and the work conducted by Kim, Hone, their team and countless other researchers could drive significant growth in the coming years.

Image: Flickr/University of Exeter

Is 2015 The Year of the Graphene?

Move over, goats. It seems that 2015 could be graphene’s watershed moment. No longer is the thin, durable and conductive material the newest nanotech flavor of the month; graphene means business.

Over the past few months, we’ve seen a plethora of stories both in scientific and mainstream media on the latest developments with graphene, ranging from the practical (long-lasting lightbulbs and efficient batteries) to the awe-inspiring (holographic projections and solar sails). Even more exciting? 2015 isn’t even halfway over.

Here are just a handful of the discoveries made this winter and spring:

February

Stronger Metal

Researchers at the Korea Advanced Institute of Science and Technology combined graphene with copper and nickel to strengthen these metals by 500 and 180 times, respectively. Using chemical vapor deposition, the researchers created ultra-tough composite materials with a vast array of practical applications. Even more interesting, this was accomplished by adding just 0.00004% graphene by weight to the resulting compound. Link

Flexible Electronics

Thanks to researchers at the Universities of Manchester and Sheffield, we may soon have flexible LED screens only 10 to 40 atoms thick. With a combination of graphene and 2D crystals, these scientists created a heterostructural LED device that emits light, flexes easily, and exhibits incredible toughness and semi-transparency. Link

March

Efficient Water Filters and Fuel Cells

At Northwestern University, Franz Geiger found that imperfect, or porous, graphene allows water and hydrogen-based solutions to pass through the material in highly efficient and controllable ways. Depending on the size of the perforation, anything from protons in energy transfers to water molecules can pass through a porous layer of graphene. This opens up considerable possibilities for clean tech, filtration and other functions. Link

Easier Graphene Manufacturing

Caltech scientists figured out how to manufacture graphene at room temperature, overcoming a major hurdle to scalable production of the material. With a nod to a 1960s-era process that generates hydrogen plasma, Caltech’s David Boyd found that the plasma stripped copper oxide from copper foil while graphene formed on the foil exposed to methane. Implemented on a grand scale, this could make graphene production far more cost-effective than previously believed. Link

Light Bulbs

It should come as no surprise that many of the biggest graphene advancements stem from the University of Manchester, where the material was first isolated in 2004. Among the school’s most recent discoveries: a lightbulb with a graphene-coated filament that both lasts longer and cuts energy use by 10%. Projected to be commercially available within a year, these bulbs will likely cost less than other LED bulbs on the market. Link

April

3D Holographic Screens

With little more than an ultra-thin graphene oxide mesh and lasers, a group of international researchers were able to create a floating hologram. By projecting lasers onto the flexible mesh, an array of nanoscale pixels bent the light to display various optical effects. Star Wars-type communications may not be far off. Link

Electric Ink for 3D Printers

Multiple researchers debuted graphene-based 3D printing materials this spring. Among the more interesting, scientists at Lawrence Livermore National Laboratory and Northwestern University introduced processes for 3D printing graphene aerogels and biocompatible graphene-based structures, respectively. Link Link

Highlights from NAB 2015

As many of you already know, NAB just wrapped up this past week in Las Vegas. NAB, the National Association of Broadcasters, holds a yearly convention where the biggest names in video-related software come together to show off their new products.

Every year I walk away from NAB with a feeling of excitement while reflecting upon the products I believe will change the fields we work in. This year, I can think of 3 products that really stood out above the rest.

The 4k Phenomenon

It seems that words like HD, 1080p and Blu-ray have only recently become household terms for talking about video; in fact, about 96.7 percent of Americans reportedly own an HDTV. But since technology never stops improving, the industry is constantly evolving and has already produced a new, better form of video: 4k Ultra High Definition (UHD).

What is 4k UHD?

4k UHD is defined as a video resolution of at least 3,840 x 2,160 pixels. Considering that our current HD videos peak at 1080p (1,920 x 1,080), the pixel count has quadrupled, making 4k clearly the next big step in home entertainment. 4k TVs are already on the market, ranging from a 39” set for $500 all the way up to an 85” model for $40,000.
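
That quadrupling is easy to verify; here’s the pixel math:

```python
# UHD doubles both the width and the height of 1080p,
# so the total pixel count goes up by a factor of four.
uhd = 3840 * 2160      # 8,294,400 pixels
full_hd = 1920 * 1080  # 2,073,600 pixels

print(f"4k UHD: {uhd:,} pixels")
print(f"1080p:  {full_hd:,} pixels")
print(f"Ratio:  {uhd / full_hd:.0f}x")  # exactly 4x
```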

Don Basile and Crew’s Top 3 Reasons to Buy an Apple Watch

With Apple’s next conference (presumably announcing the Apple Watch) getting closer and closer, rumors have begun to spread about what capabilities the watch will include. After doing a bit of research, we’ve come up with our top 3 reasons we want to buy one (and you should too)!

1. It’s a Fashion Statement
The first thing we need to get out of the way is that the Apple Watch is as much a fashion statement as it is a tool. The options range from a less expensive, fun and colorful watch that your friends will love to something a bit pricier and more professional, like the stainless steel version that will look great with your next business suit, so you’re bound to be at the top of your style game. Heck, they’re even offering an 18k gold watch for all you high rollers out there! Our point is this: there will be a version of the watch for everyone, and plenty of options to fit your unique personality.

Net Neutrality, What Is It And Why Should I Care?

The fact is, Net Neutrality will drastically affect every one of us, and if we don’t educate ourselves and voice our opinions on the topic, we could see sweeping changes in the way we are able to use the internet today. So what is it? These videos, starting with this one by The Verge, give fantastic explanations of exactly what it is, what the FCC is trying to accomplish, and why people are either for or against Net Neutrality.

The Rise of the Personal Drone

Every once in a while a piece of technology is released to the public that changes the world. Over the past two years, a new product has been developed that seems to be doing just that. Introducing: the personal drone. Okay, so most of you have already heard of them. A lot of you have probably seen or even used one. But for those of us who haven’t (or maybe need a refresher), let’s take a look at what they are and what they’re capable of doing.

Personal Drones and Their Many Uses

The personal drone, also known as an unmanned aerial vehicle (UAV), is essentially the RC helicopter’s older, much more advanced brother. The basics, like flying the craft around using a remote control, are still in place, but recent advances in technologies like gyros, batteries, cameras and GPS have turned the old toy helicopters into something much more evolved. Almost overnight, what was once considered by many a hobbyist’s toy has become a serious aerial tool!

Apple and IBM Working Together?!

That’s right, you probably never thought you would see a headline like that, but a lot has changed over the years. Back in the late ’80s, Apple and IBM were in a constant battle to prove whose product was better. While IBM focused more on the corporate side of things, Apple worked on revolutionizing the industry by creating a better, easier user experience.

Now both companies have decided to look at things in a new light. Instead of seeing each other as competition, they are working together to provide users with an even better product than either could offer on its own. On Dec. 10, 2014, the companies released the first product formed as a result of their new partnership!

Is Warren Buffett’s Duracell purchase the right move?

Last week it was announced that Warren Buffett’s holding company, Berkshire Hathaway, plans to buy Duracell from Procter & Gamble. Buffett’s firm will pay around $5 billion for the company, which raises the question: is it worth it?

According to P&G, Duracell holds 25 percent of the global battery market. But that alone doesn’t make it a great investment. In fact, Duracell has been underperforming relative to its own past results over the last ___ years. More likely than not, Duracell appeals to Buffett because it owns and sells a product that consumers need to purchase again and again.

That’s where the problem arises. More and more products are being sold with rechargeable batteries built in, and the “batteries not included” label isn’t as common as it used to be. Just look at your cellphone, camera or laptop. As technology progresses, the majority of products no longer require separate disposable batteries.

Duracell needs to make a change, because the age of the disposable battery is slowly coming to a close. This is where Buffett can help the company. With the name Duracell has already established for itself, it now needs to refocus and look into new opportunities. For example, the company has already been developing lithium-ion batteries, giving it an edge in the rechargeable battery market. Focusing on that technology and really pushing it to the next level could be huge.

So, back to the question: will the deal be worth it to Warren Buffett? In my opinion, yes, as long as Buffett has a plan to change things within the company and evolve with the technology. Will it happen? We don’t know for sure, but we do know that there is real potential and plenty of hard workers on both the Duracell and Berkshire Hathaway teams. Give it a little time, and I’m sure we’ll see Duracell back on top in the long run.