The Growing Field of Blockchain Forensics

Blockchain technology has been making waves in the tech industry since it first emerged with the development of Bitcoin in 2008. It offers a decentralized and secure way to store and transfer data, making it an attractive option for industries ranging from finance to healthcare. However, the pseudonymity and decentralization that make blockchain technology so appealing also make it an attractive tool for criminals. This is where blockchain forensics comes in.

Blockchain forensics is the field of analyzing blockchain transactions to uncover illegal activities such as money laundering, drug trafficking, and cybercrime. It involves using various tools and techniques to analyze the blockchain and track down the individuals or organizations involved in these activities. The growing field of blockchain forensics is becoming increasingly important as more businesses and organizations adopt blockchain technology. According to a report by Grand View Research, the blockchain forensics market is expected to reach $11.6 billion by 2028, driven by increasing concerns over the use of cryptocurrencies in illegal activities.

One of the key tools used in blockchain forensics is blockchain analysis software. This software allows forensic investigators to track transactions on the blockchain and analyze the patterns and behaviors of users. By examining the transactions associated with a particular wallet address, investigators can often determine the identity of the owner or user of that address.
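
To make this concrete, here is a minimal sketch of one widely used chain-analysis heuristic: common-input-ownership clustering, which groups addresses that co-sign the same transaction. The data and structures below are illustrative, not any particular tool’s API.

```python
from itertools import combinations

# Sample transactions; each lists the input addresses that signed it.
transactions = [
    {"inputs": ["addr_A", "addr_B"], "outputs": ["addr_X"]},
    {"inputs": ["addr_B", "addr_C"], "outputs": ["addr_Y"]},
    {"inputs": ["addr_D"],           "outputs": ["addr_Z"]},
]

parent: dict[str, str] = {}  # union-find forest over addresses

def find(a: str) -> str:
    parent.setdefault(a, a)
    while parent[a] != a:
        parent[a] = parent[parent[a]]  # path compression
        a = parent[a]
    return a

def union(a: str, b: str) -> None:
    parent[find(a)] = find(b)

for tx in transactions:
    for addr in tx["inputs"]:
        find(addr)  # register every input address
    for a, b in combinations(tx["inputs"], 2):
        union(a, b)  # addresses spent together likely share an owner

clusters: dict[str, list[str]] = {}
for addr in parent:
    clusters.setdefault(find(addr), []).append(addr)
print(clusters)  # addr_A/B/C collapse into one cluster; addr_D stands alone
```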

Another important aspect of blockchain forensics is the use of metadata. While the blockchain itself is pseudonymous, it is possible to gather metadata from other sources, such as social media or IP addresses, to link specific transactions to individuals or organizations. By analyzing this metadata, investigators can build a more complete picture of the individuals or groups involved in illegal activities.
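
A hedged sketch of how that enrichment might look in practice: the snippet below joins an on-chain cluster against off-chain records. Every name and record here is invented for illustration.

```python
# Hypothetical off-chain records gathered from exchanges, forums, leaks, etc.
offchain_metadata = {
    "addr_C": {"source": "exchange_signup", "account": "user_42"},
    "addr_Z": {"source": "forum_post", "handle": "cryptotrader99"},
}

def attribute_cluster(cluster: list[str]) -> list[dict]:
    # A single labeled address can de-anonymize the whole cluster.
    return [offchain_metadata[a] for a in cluster if a in offchain_metadata]

print(attribute_cluster(["addr_A", "addr_B", "addr_C"]))
# -> [{'source': 'exchange_signup', 'account': 'user_42'}]
```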

The role of blockchain forensics is not limited to law enforcement agencies. Many businesses and organizations are also turning to blockchain forensics to ensure that their own blockchain networks are secure and free from illicit activity. By analyzing the behavior of users on their networks, businesses can identify potential vulnerabilities and take steps to prevent fraudulent activity.

Blockchain forensics also plays a key role in regulatory compliance. With the growing adoption of blockchain technology in various industries, regulators are increasingly concerned about the potential for cybercrime. By monitoring blockchain transactions and identifying suspicious behavior, regulators can ensure that businesses are complying with relevant laws and regulations.
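
As a toy illustration of this kind of monitoring, the rule below flags behavior resembling “structuring,” that is, repeated transfers just under a reporting threshold. The threshold and amounts are made up.

```python
# Flag several transfers that land just below a reporting threshold.
THRESHOLD = 10_000
transfers = [9_900, 9_950, 9_800, 120, 9_990]

near_misses = [t for t in transfers if THRESHOLD * 0.95 <= t < THRESHOLD]
if len(near_misses) >= 3:
    print(f"ALERT: {len(near_misses)} transfers just under {THRESHOLD}")
```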

Despite its growing importance, blockchain forensics is still a relatively new field, and there are many challenges to overcome. One of the biggest challenges is the lack of standardization in blockchain data. Unlike traditional financial transactions, blockchain transactions can take many different forms, and the data associated with these transactions can be difficult to interpret. Another challenge is the issue of privacy. While blockchain forensics can be a powerful tool for identifying criminal activity, it also raises concerns about privacy and civil liberties. As blockchain technology continues to evolve, it will be important to strike a balance between security and privacy.

As more businesses and organizations adopt blockchain technology, the field of blockchain forensics will only become more important. With the potential for illegal activity on the blockchain, it is essential to have the tools and techniques to analyze transactions and identify potential threats. While there are many challenges to overcome, the field will undoubtedly continue to grow in the coming years.

Social Media Platforms’ Continuing Embrace of NFTs

It was almost one year ago that social media platforms such as Twitter began supporting non-fungible tokens (NFTs). NFTs are unique cryptographic tokens that cannot be replicated; each one essentially proves ownership of a particular digital item, such as an image. Twitter, along with other platforms like Facebook and Instagram, has been working toward allowing users to display NFTs as profile information. Clicking on that information takes a viewer to a page with additional details about the token. This is very much a digital status symbol, a way for people to show off their membership or digital identity.
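
Under the hood, an NFT is little more than a unique token ID mapped to an owner on a shared ledger, as in Ethereum’s ERC-721 standard. The toy registry below sketches that idea; it is illustrative, not a real contract.

```python
class ToyNFTRegistry:
    """Minimal stand-in for an NFT contract's ownership table."""

    def __init__(self) -> None:
        self.owner_of: dict[int, str] = {}  # token_id -> owner address

    def mint(self, token_id: int, owner: str) -> None:
        if token_id in self.owner_of:
            raise ValueError("token already exists; NFTs are non-fungible")
        self.owner_of[token_id] = owner

    def verify(self, token_id: int, claimed_owner: str) -> bool:
        # This lookup is what a "verified NFT profile picture" relies on.
        return self.owner_of.get(token_id) == claimed_owner

registry = ToyNFTRegistry()
registry.mint(1, "0xAlice")
print(registry.verify(1, "0xAlice"))  # True: Alice may display token #1
```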

Reddit is the latest social media platform to embrace NFTs, with avatars available for purchase using fiat currency; Reddit takes a five percent cut of each transaction. However, Reddit declines to call these NFTs what they are: while they are part of the Polygon blockchain, they will be referred to as collectible avatars on the platform. Reddit has said that it views blockchain as a method of empowering its users and creating a more independent community on its site.

Facebook began allowing certain creators to showcase their NFTs in the summer of 2022 as well, via a digital collectibles tab in a user’s profile. This came shortly after Instagram (run by the same parent company, Meta) enabled NFTs to be shared, a surprising move given the recent slump in crypto products. NFT sales had their worst month in a year in June 2022, reaching just over $1 billion.

In addition to social media platforms, many brands are embracing NFTs, among them Gucci, Adidas, Nike and Bose. This is a very different marketing concept, as awareness is built through posts within community channels rather than through ads.

In the future, we might see a shift away from branding as community becomes more important. Content is still very relevant, but there will also be more conversation between communities and within them. Transparency will be necessary as newcomers to NFTs work through the learning curve. Unless you know some elite creators and collectors, you may not have even noticed NFTs making their way onto these platforms, but you may start to see more and more of them popping up over time.

We’re at a point where people want ways of showing off their status to others, and NFTs are just one new method of doing so: a way to display membership and status within different identity groups. Each social media platform utilizes NFTs in its own way.

On Instagram, status may be achieved by way of fashion or photo skills. Twitter is all about making a big statement using a small number of words; your wit and your knowledge are important. TikTok focuses on video, music and performing. On a positive note, social media platforms’ continuing embrace of NFTs may encourage a lot more authenticity than we’ve seen recently. Instead of merely posting photos that project a certain status, users can point to the verifiable investment an NFT represents.

How Blockchain Could Alter the World of Content Creation and Distribution

Over the course of his eight seasons in the National Basketball Association (NBA), Spencer Dinwiddie has established himself as a solid player. But it is entirely possible that he will make an even greater impact off the court.

Dinwiddie, who began the 2022-23 season with the Dallas Mavericks (the fourth NBA team to employ him), co-founded the blockchain-based platform Calaxy.com in 2020 with tech entrepreneur Solo Ceesay. The site, which remains in the beta stage, is designed to provide fans greater engagement with content creators like Ezekiel Elliott, a running back with the Dallas Cowboys of the NFL, and R&B star Teyana Taylor.

The site is one of many to emerge in recent years, and a continuation of the trend that began with the emergence of non-fungible tokens (NFTs). It has become readily apparent that content creators around the world – and there are 50 million of them, by one estimate – can peddle their wares courtesy of blockchain, a decentralized, immutable online ledger that records transactions. Most often associated with cryptocurrency, it has shown that it has value in a great many sectors, including healthcare, elections, real estate and supply chain management. 

In this case it affords artists, photographers, designers, bloggers and social media specialists a great opportunity to monetize their work, while at the same time protecting such creations from scammers and plagiarists. As Dinwiddie told the website For the Win in 2021:

“The people who generate the content are the ones generating the value. It’s the same thing with the NBA. There is inherent value for making a transaction seamless or making it so where you consume the content is easy to find. It’s very nice to know I can go to Barclays Center to watch (star players) Kevin Durant, James Harden and Kyrie Irving. But at the end of the day, they’re the ones who are creating the value and they deserve a lion’s share of the profit. The reason why things like that don’t typically happen is because the power has been in legacy systems.”

That power is becoming far more diffuse. Blockchain-powered platforms like Wildspark, Visme, Privi, Steemit, Theta and Pixsy protect content creators from bad actors by giving them a verifiable digital ID. There is no censorship, and such sites afford these creatives an opportunity to build an online community, given the direct access they have to their audiences.

But the big thing is monetization. Carolyn Dailey, founder of a site called Creative Entrepreneurs, told the website Courier.com that such platforms enable creators to “sell directly to their communities, without going through traditional ‘routes to market,’ (which) also take a cut of their profits.”

Dailey pointed out that there is no need to use galleries, music producers or record labels, while at the same time noting that the potential drawbacks are blockchain’s hazy future, investor uncertainty and the potential tax implications for creatives.

At the same time, the upside is readily apparent. One painter, Sasha Zuyeva, told Courier that the interest in her work has “clearly grown.” And Dinwiddie believes “the sky’s the limit.” So while it would be a mistake to believe blockchain can solve every problem faced by content creators, it certainly represents a giant leap forward.

The Unsettling Truth About The Worldwide Tech War

Even before the economic turmoil now exacerbated by the Russian-Ukrainian war, the U.S. and China have been locked in a technology battle that has raised concerns not only in Washington, DC but throughout the country over economic competition and national security. While the U.S. leads in software and semiconductors, China clearly comes out on top when it comes to smartphones and the 5G network, as well as in artificial intelligence (AI), machine learning, and a wealth of other technologies. As the Chinese technology market increasingly distances itself from the West, this feeds existing tensions between China and the U.S.

According to a post on TheDiplomat.com, the U.S. government, in an effort to constrain or delay China’s technology advancements, has taken a series of measures and imposed sanctions against Chinese tech companies. Since President Biden took office, Congress, the government, and several key think tanks have released 209 bills, policies, and reports concerning science and technology policies toward China. Huawei, one of China’s preeminent communications manufacturers, is one of the largest targets of U.S. sanctions. In 2020, a new regulation prohibited any entity from supplying chips made with U.S. technology to Huawei, yet the company continues to generate revenue. The diversity of the business environment reflects the complexity of the China-U.S. economic and trade relationship. While the two countries are independent, they nonetheless cannot completely sever ties, and economic and trade sanctions will bring huge losses to both.

Jacob Helberg, who led Google’s global internal product policy efforts to combat disinformation from 2016 to 2020, has written a book entitled “The Wires of War: Technology and the Global Struggle for Power,” in which he calls the power struggle between China and the U.S. a “gray war.” 

Helberg told Axios.com there is a battle to control what users see on their screens, including information and software, and a “backend” battle to control the internet’s hardware, including 5G networks, fiber-optic cables, and satellites. He believes that this technology-driven war will influence the balance of power for the coming century, as without a solid partnership with the government, technology companies are unable to protect democracy from autocrats looking to sabotage it, from Beijing to Moscow and Tehran. To win this skirmish, Helberg suggests using trade policy and alliances to form a free and secure internet and information infrastructure, and the capability to levy what he calls “cyber sanctions” that restrict access to technologies and platforms controlled by hostile foreign governments.

America may believe that it can maintain its technological edge, but China is spending a significant amount of money on high-tech research. According to a post on ABC.net, China has announced a five-year plan worth $1.8 trillion to monopolize AI, robotics, 6G, and other new technologies by 2035.

James Green, former minister for trade affairs at the U.S. embassy in China, predicts that the tech war that grew during the Trump administration is not going to be resolved any time soon.

“Some of the issues, particularly around technology and technology ecosystems, are ones that will be with us for years to come,” he says.

How AI is Impacting Data Centers

In 2018, Gartner Distinguished VP Analyst David Cappuccio wrote a piece entitled “The Data Center is Dead,” in which he predicted that because of the continuing rise of such things as cloud providers, Software as a Service (SaaS) and edge services, some 80 percent of enterprises will shut down their traditional data centers by 2025.

Certainly the world’s seven million data centers, labelled “the building blocks of our online world” in an October 2020 piece on the website Bigstep.com, have continued to evolve since then. That is in no small part due to the pandemic, which has accelerated automation. Artificial intelligence is central to that, as it makes it possible to detect possible risks, optimize energy usage, ward off cyberattacks and even bolster on-the-ground security.

AI is, in the words of Jensen Huang, CEO of the computer systems design services company Nvidia, “the most powerful technology force of our time,” and other business leaders appear to have taken that to heart. Some 83 percent of organizations increased their AI/machine learning budgets from 2020 to 2021, well aware of the benefits AI can provide. As it was put in one blog post, no other technology improves the efficiency of a data center to quite the degree of AI. That was demonstrated by MIT researchers who devised an AI system that hastened processing speeds by as much as 30 percent, and by an HPE deep-learning application that identified and resolved bottlenecks.

Some of the most promising uses of AI to date have come in the area of energy conservation, which is of no small concern, considering data centers consume about three percent of the world’s electricity, and produce about two percent of its greenhouse gasses. And with more and more data being churned out by the year, it is estimated that data centers will gobble up 10 percent of the world’s energy by 2030.

Google showed how AI can be used to curtail this trend when, in 2014, it used its DeepMind AI to reduce the energy used for cooling by 40 percent at one of its data centers, achieving an overall 15 percent reduction in power usage effectiveness (PUE) overhead.
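
For readers unfamiliar with the metric, PUE is total facility power divided by the power drawn by the IT equipment alone, so a reduction in PUE “overhead” applies only to the share above 1.0. A minimal sketch with made-up numbers:

```python
# Illustrative figures, not Google's actual numbers.
total_facility_kw = 1_200  # servers plus cooling, lighting, power losses
it_equipment_kw = 1_000    # servers, storage and network gear alone

pue = total_facility_kw / it_equipment_kw  # 1.20
overhead = pue - 1.0                       # 0.20, the non-IT share

# A 15 percent reduction in PUE overhead shrinks only that non-IT share:
improved_pue = 1.0 + overhead * (1 - 0.15)  # 1.17
print(f"PUE: {pue:.2f} -> {improved_pue:.2f}")
```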

AI also defends against cyberthreats by discerning deviations from normal network behavior, and can even power robots that patrol the grounds outside data centers.

It is inarguable, then: AI usage in data centers is on the rise, and as mentioned that is part of the larger trend toward automation. According to Mordor Intelligence, the data center automation market, which stood at $7.34 billion in 2020, is expected to be valued at $19.65 billion by 2026, a compound annual growth rate of 17.83 percent.
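
Those figures are easy to sanity-check with the standard compound annual growth formula:

```python
start_value = 7.34  # market size in $B, 2020
end_value = 19.65   # projected size in $B, 2026
years = 6

cagr = (end_value / start_value) ** (1 / years) - 1
print(f"CAGR: {cagr:.2%}")  # ~17.8%, matching the cited 17.83 percent
```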

While Dave Sterlace, an executive at the automation company ABB Ltd., told Datacenterdynamics.com that such innovation is “not hugely widespread yet,” he called that which has emerged to date “exciting” and added, “The potential is there.”

It seems certain to be realized in the years ahead, as data centers continue their evolution, and continue to power the online world.

A Breakthrough in Graphene-Based Water Filtration

Two new graphene-related developments this year — one by Brown University researchers in January and another by MIT researchers in August — showed that this one-atom-thick layer of carbon might offer promise as a means of water filtration. And if it someday proves scalable, it could go a long way toward alleviating the worldwide water shortage, and countering mankind’s unrelenting befouling of the planet’s waters.

The Brown team discovered an inventive way to vertically orient the nanochannels between two layers of graphene, thus making those channels passable for water molecules but little else. These channels normally have a horizontal orientation, but the researchers found that by taking the graphene, stretching it on an elastic substrate and then releasing it, wrinkles could be formed.

Epoxy was then applied to hold this material — labelled VAGME (vertically aligned graphene membranes) by the team — in place. Brown engineering professor Robert Hurt, who co-authored the research, summarized these findings as follows, in a release on the university’s website:

“What we end up with is a membrane with these short and very narrow channels through which only very small molecules can pass. So, for example, water can pass through, but organic contaminants or some metal ions would be too large to go through. So you could filter those out.” 

The MIT team, meanwhile, found that graphene oxide foam, when electrically charged, can capture uranium and remove it from drinking water. This is a particularly valuable discovery, given the fact that uranium is continually leaching into reservoirs and aquifers from nuclear waste sites and the like, a development that can lead to various health issues among humans. 

MIT professor Ju Li noted in a release on the MIT website that this filtration method can also be used for metals such as lead, mercury and cadmium, and added that in the future passive filters might give way to smart filters “powered by clean electricity that turns on electrolytic action, which could extract multiple toxic metals, tell you when to regenerate the filter, and give you quality assurance about the water you’re drinking.”

Graphene oxide has also been used by the British company G2O Water Technologies to enhance water filtration membranes, work that in July of this year made that organization the first of its type to land a commercial contract.

The effectiveness of graphene-oxide membranes was first seen in 2017 at the UK’s University of Manchester, when researchers discovered that the membranes could sift out not only impurities but also salt, meaning they could potentially be used to desalinate seawater.

As of 2018, just 300 million people around the world obtained at least some of their drinking water from desalination, and the process is often seen as costly and energy inefficient. Researchers at Purdue University announced in May of this year, however, that they had developed a more energy-efficient method of reverse osmosis, the most widely used type of desalination, a promising advancement indeed.

There is also the issue of pollution, which as mentioned is a considerable one. Studies have shown that 80 percent of the world’s wastewater is dumped back into the environment with little of it treated, water that may be fouled by the aforementioned metals, as well as nutrients from farm runoff, plastics and more.

There is no simple way to deal with such a deep-seated problem, but certainly graphene offers a potential solution.

Are NFTs Worth the Environmental Cost?

Not that any further proof was needed, but the night of March 27, 2021 offered definitive evidence of the impact non-fungible tokens (NFTs) are making — that besides transforming the world of digital art, they are even veering toward the mainstream.

On that night’s edition of “Saturday Night Live,” cast members Pete Davidson and Chris Redd joined musical guest Jack Harlow in a parody music video inspired by the song “Without Me,” by the rapper Eminem. And their subject matter was in fact NFTs, which were described as “insane,” since they are “built on a blockchain.”

“When it’s minted,” Harlow sang, “you can sell it as art.”

It was breezy and funny and maybe even a little educational. Who knew that we could be entertained while learning about digital tokens (essentially certificates of ownership) that allow artists to peddle their virtual wares via blockchains? 

The reality about NFTs is far from a laughing matter, however, and from an environmental standpoint, even somewhat dire. Yes, NFTs open digital frontiers to artists often shut out of the legacy art market, a $65 billion business.

That said, things have meandered in curious directions, since these tokens can be assigned to any unique asset, including the first tweet by Twitter head Jack Dorsey, a rendering of Golda Meir, the late Israeli prime minister, and even a column on NFTs by the New York Times’ Kevin Roose. A major league baseball player named Pete Alonso, first baseman for the New York Mets, has even issued one, in hopes of providing financial support to his minor league brethren.

But as mentioned, the considerable downside is the carbon footprint made by NFT transactions. Memo Akten, an artist and computer scientist, calculated in December 2020 that the energy consumed by crypto art is equivalent to the amount used during 1,500 hours of flying or 2,500 years of computer use.

Others in the field have called NFTs an “ecological nightmare pyramid scheme,” and a recent post on the website The Verge explained why: The marketplaces that most often peddle NFTs, Nifty Gateway and SuperRare, use Ether, the cryptocurrency of the Ethereum platform. This decentralized digital ledger uses “proof of work” protocols, which require users (a.k.a. “miners”) to solve arbitrary mathematical puzzles in order to add a new block, verifying the transaction.

This process is energy inefficient, and purposely so. The thinking is that hackers will find that this energy expenditure will not be worth their while — i.e., it acts as something of a security system, since these ledgers are not subject to third-party control, which would handle things like warding off cybercriminals.
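
A toy version of that puzzle-solving makes the energy cost easy to see: miners grind through nonces until a hash clears an arbitrary difficulty bar. (Ethereum’s actual mining algorithm, Ethash, differs in its details, but the brute-force principle sketched here is the same.)

```python
import hashlib

def mine(block_data: str, difficulty: int = 4) -> int:
    """Find a nonce whose hash starts with `difficulty` zeros."""
    target = "0" * difficulty
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{block_data}{nonce}".encode()).hexdigest()
        if digest.startswith(target):
            return nonce  # this nonce "proves" the work was done
        nonce += 1        # every failed guess burns real electricity

print(mine("transfer NFT #42 to alice"))
```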

Akten is of the opinion that a “radical shift in mindset” is in order, and writes on the website FlashArt that the future Ethereum 2.0 represents a step in that direction, as it uses energy-efficient “proof of stake” protocols, where mining power is based on the amount of cryptocurrency a miner holds.

“That would essentially mean that Ethereum’s electricity consumption will literally over a day or overnight drop to almost zero,” Michel Rauchs, a research affiliate at the Cambridge Centre for Alternative Finance, told The Verge.
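
A toy contrast shows why proof of stake is so much lighter: validators are simply chosen in proportion to their holdings, with no race through hash puzzles.

```python
import random

# Stakes are illustrative; selection odds are proportional to holdings.
stakes = {"val_A": 50, "val_B": 30, "val_C": 20}
validator = random.choices(list(stakes), weights=list(stakes.values()))[0]
print(f"{validator} proposes the next block")  # one cheap draw, no grinding
```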

There are those who caution, however, that the full potential of Ethereum 2.0 is “still years away” from being realized, and others who believe that its shift toward a proof-of-stake model will lead to the platform’s demise.

There is, however, a solution available now: Bitcoin Latinum has brought that green proof of stake to the Bitcoin ecosystem, and it is available for minting on platforms like the Unico NFT platform.

There also are other options. There is the practice of lazy minting, where an NFT is not created until its initial purchase. There are sidechains, where NFTs are moved onto Ethereum after being minted on proof-of-stake platforms. There are bridges, which provide interoperability between blockchains.

As with everything else, there is also the possibility of using clean energy sources in the mining process. According to a recent post on Wired, they could power 70 percent of those operations. The counterargument is that if clean energy is used in that fashion, less of it will be available to fulfill other demands.

There are also those, like Joseph Pallant, founder and executive director of the Vancouver, BC-based nonprofit Blockchain for Climate Foundation, who believe that the solution to the problem lies with Ethereum’s continuing evolution. Pallant also wonders if the platform’s energy usage is overblown. 

That would appear to be a minority opinion, however. Most observers believe NFTs, for all the opportunities they offer artists, need to become a better version of themselves — that their energy needs will need to be addressed in some fashion. More than likely, some combination of the above solutions will prove effective, but whatever the case, the need is clear. This is a problem that isn’t going away, and it will require some degree of resourcefulness to solve it.

Data, the “New Oil,” Cannot Be Used If Left Unrefined. Thankfully, AI Can Help.

British data scientist Clive Humby was famously quoted in 2006 as saying that “data is the new oil,” a statement that has been over-analyzed and occasionally criticized, but one that nonetheless retains its merits all these years later.

Soon after Humby made that statement, Michael Palmer, executive vice president of the Association of National Advertisers, amplified the point in a blog post, writing that data cannot be used if it is left unrefined. Data, he asserted, is not fact, any more than fact is insight. Context is vital in order to draw conclusions from any information that is gathered. And, he added:

“The issue is how do we marketers deal with the massive amounts of data that are available to us? How can we change this crude into a valuable commodity – the insight we need to make actionable decisions?”

These questions are even more consequential nowadays, given the exponential increase in the amount of data in recent years. According to Statista, some 74 zettabytes of data will be created, copied, captured, or consumed around the world in 2021, 72 zettabytes more than in 2010. Even more tellingly, this year’s total is expected to double by 2024.

As a result, it is incumbent upon organizations to employ tools like artificial intelligence and machine learning, which enable them to ingest and process all this information, and in turn, gain insights that will make possible better decision-making and ultimately improve the bottom line.

Specifically, the convergence of Big Data and AI enables integration and management to be automated. It allows for data to be verified, and some advanced AI even makes it possible to access legacy data. The resulting increase in efficiency could lead, according to a 2018 McKinsey study, to the creation of between $3.5 trillion and $5.8 trillion per year in value — and perhaps as much as $15.4 trillion — across no fewer than 19 business areas and 400 potential use cases. McKinsey also concluded that nearly half of all businesses had adopted AI, or were poised to do so.

They include, not surprisingly, tech giants like Amazon, Microsoft and Netflix, but also such widely divergent organizations as BNP Paribas, the world’s seventh-largest bank, and the energy company Chevron, among many others.

BNP Paribas Global Chief Information Officer Bernard Gavgani described data as being “part of our DNA” in an interview with Security Boulevard and noted that between September 2017 and June 2020, the bank’s use cases increased by 3.5 times. The goals, he added, were to improve workflows and customer knowledge, two examples being a scoring engine that enables the bank to establish credit and an algorithm that increases marketing efficiency.

Chevron, meanwhile, increased its productivity by 30 percent after the implementation in 2016 of a machine learning system that was able to pinpoint ideal well locations by incorporating data from the performance of the company’s previous wells. More recently, the company entered into a partnership with Microsoft, enabling it to use AI to analyze drilling reports and improve efficiency.
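
As a hedged illustration of the general approach (not Chevron’s actual system), a candidate drilling site can be scored by the historical output of its nearest existing wells:

```python
# Coordinates and outputs are invented for illustration.
past_wells = [((10.0, 20.0), 900), ((11.0, 21.0), 950), ((40.0, 5.0), 300)]
candidates = [(10.5, 20.5), (39.0, 6.0)]

def predicted_output(site: tuple[float, float], k: int = 2) -> float:
    # Average the output of the k nearest existing wells.
    by_distance = sorted(
        past_wells,
        key=lambda w: (w[0][0] - site[0]) ** 2 + (w[0][1] - site[1]) ** 2,
    )
    return sum(output for _, output in by_distance[:k]) / k

best = max(candidates, key=predicted_output)
print(best)  # the candidate surrounded by productive wells wins
```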

The latter is proof that data is not only the new oil, but that it can help find it, too. But the larger point is something else Palmer mentioned in his follow-up to Humby’s long-ago assertion — that data is merely a commodity, while insight is the currency an organization can use to drive growth. You need the best tools to drill down and gain that insight, and in this day and age AI and ML represent the best of the lot.

Data Centers Clean Up Their Act (and Find a New Rhythm)

The 2017 song “Despacito,” by Puerto Rican artists Luis Fonsi and Daddy Yankee (and later re-mixed with no less a talent than Justin Bieber), remains one of the most popular songs of all time. 

Its music video surpassed three billion YouTube views in record time, and by the end of 2019 it had been viewed no fewer than six billion times. Vulture called the song, the title of which means “slowly” in English, “a sexy Spanish sing-along” featuring “catchy refrains and (an) insistent beat.” And according to NPR it is “the culmination of a decade-long rise of sociological and musical forces that eventually birthed and cemented a style now called urbano.”

It is also, sadly, a contributor to the worldwide environmental crisis. Fortune noted last year that a mere Google search for “Despacito” activates servers in six to eight different data storage centers around the globe, and that YouTube views of the video — which were at five billion at the time the piece appeared — had consumed the energy equivalent of 40,000 U.S. homes in a year.

In all, data centers, the very backbone of the digital economy, soak up about three percent of the world’s electricity, and produce about two percent of its greenhouse gasses. And with data exploding — there were 33 zettabytes of it in 2017, and there are expected to be 175 by 2025 — these issues are only expected to become more acute. It is estimated that by 2030, data centers will consume 10 percent of the world’s energy.

“Ironically, the phrase ‘moving everything to the cloud’ is a problem for our actual climate right now,” said Ben Brock Johnson, a tech analyst for WBUR, a Boston-based NPR affiliate.

Thankfully, tech giants are already in the process of dealing with the problem. Sustainable data centers are not only a thing of the future; they are a thing of the present. In other words, the cloud has become greener, in the hope that it will become greener still.

That’s reflected in the fact that Microsoft’s stated goal is to slice its carbon emissions by 50 percent in the next decade; that Facebook bought more renewable energy than any other company in 2019 (and was followed by Google, AT&T, Microsoft and T-Mobile, in that order); that in April of this year Google introduced a computing platform it labelled “carbon intelligent;” and that Amazon Web Services hopes to be net-carbon-zero by 2040, a decade earlier than mandated by the Paris Agreement.

As Microsoft president Brad Smith told Data Center Frontier, his company sees “an acute need to begin removing carbon from the atmosphere,” which has led it to compile “a portfolio of negative emission technologies (NET) potentially including afforestation and reforestation, soil carbon sequestration, bioenergy with carbon capture and storage (BECCS), and direct air capture.”

The initial focus, Smith added, will be on these “nature-based solutions,” with “technology-based solutions” to follow. The net effect will be the same, however — a greener cloud. It comes not a moment too soon, as reflected in the aforementioned statistics, as well as the fact that the U.S. alone was responsible for about 33 percent of total energy-related emissions in 2018, or that by 2023 China’s data centers are expected to increase energy consumption by 66 percent.

How it happens

As mentioned, the commonplace use of technology in our society, while seemingly harmless, contributes to these emission levels. Everything from watching that “Despacito” video to uploading a picture to scrolling through your Twitter feed involves data centers.

While electricity might on the surface seem distinct from other emission sources, the reality is that it is largely generated from the same resources that fuel other industries. Coal, natural gas, and petroleum are the primary resources used to produce electricity. For example, according to a Greenpeace study, 73 percent of China’s data centers are powered by coal.

But it’s not just the fact that data centers require electricity to run. The sheer amount of energy consumed by data centers means that they produce a lot of heat, and cooling systems must be put in place to counteract that.

On average, servers need to maintain temperatures below 80 degrees Fahrenheit (about 27 degrees Celsius) to function properly. Often, cooling makes up about 40 percent of total electricity usage in data centers. So altogether, the reliance on fossil fuels and nonrenewables, alongside the necessary cooling systems, is what ultimately causes the emissions from data centers.

Where to go from here

Many top-tier tech companies have been taking their cues from Susanna Kass, a member of Climate 50 and the data center advisor for the United Nations Sustainable Development Goals Program. With 30 years of data-center experience herself, Kass applauds the sustainability initiatives launched by such companies, and believes there will in fact be an escape from the “dirty cloud,” as she calls it, that has enveloped the industry.

In addition to the aforementioned steps, she said these centers need to curtail the practice of over-provisioning, which commonly involves providing one backup server for every four that are running. She added that coal obviously needs to be phased out as a power source for these centers, and that carbon neutrality must remain the top priority.

“The goal,” she told The New Stack, “is to promote better digital welfare as we evolve into the digital age.”

Indeed there is no other choice, given recent reports indicating that we only have until 2030 to head off climate catastrophe. With great (electric) power comes great responsibility, and it seems the tech giants are now heeding that call.

What Makes a Great Nanomaterial?

By this point, we have discovered a slew of natural, incidental, and artificial nanomaterials with a wide variety of properties and capabilities. Particularly prominent are graphene and borophene — i.e., one-atom-thick layers of graphite and boron, respectively — which have been widely hyped in recent years.

Graphene, discovered only in 2004, has alternately been labelled “the most remarkable substance ever discovered” and a substance that “could change the course of human civilization.” Not to be outdone, borophene, first synthesized in 2015, has been dubbed “the new wonder material,” as it is stronger and more flexible than even graphene.

Certainly there is a place for both in a wide variety of areas (not the least of which are areas like electronics and robotics), but the full extent of their capabilities is still being explored. And certainly their versatility has set them apart from other nanomaterials, like nanoenzymes, which have found particular application in the medical field (specifically, in tasks like bioimaging and tumor diagnosis), or the membranes that are used for water purification. 

Yet, these nanomaterials (and many others) have yet to reach their full potential. 

What factors determine the effectiveness of a nanomaterial for human use? What makes a truly great nanomaterial? The answer largely comes down to the material’s capability, functionality, and scalability.

Capability

How much a nanomaterial’s capabilities matter depends on how it can be applied, and since the range of possible applications is vast, nearly any nanomaterial is capable of fulfilling some kind of function. But utilizing that many nanomaterials is cumbersome and inefficient. By discovering and focusing on a few nanomaterials with strong and multifaceted capabilities, we can more rapidly make advancements and create new nanotech.

The most produced nanomaterials to date are carbon nanotubes, titanium dioxide, silicon dioxide and aluminum oxide, and they are great examples of varied capability. Carbon nanotubes are most often used in synthetics. Titanium dioxide is used for paints and coatings, as well as cosmetics and personal care products. Silicon dioxide is used as a food supplement, and aluminum oxide is used in various industries.

The distinction for these four nanomaterials is that while their use is widespread, their capabilities pale in comparison to other materials. For instance, graphene and borophene have much greater potential in not only the aforementioned fields but also medicine, optics, energy, and more. Graphene and borophene truly demonstrate what a great nanomaterial’s capabilities should look like.

Functionality

It’s one thing to have the capability for greatness, but it’s another to be able to carry it out. This is where nanomaterials are put to the test, to see if they can integrate well into our technology and products in order to improve them. In most cases, nanotech and other advancements won’t be composed solely of the specific nanomaterial, so making sure it functions properly alongside other materials and in composite forms is essential.

This is where nanomaterials begin to have trade-offs. Titanium dioxide, zinc oxide and silicon dioxide are all utilized effectively and with ease, with very few issues related to function. However, stronger materials tend to have more unstable qualities: borophene in particular is susceptible to oxidation, meaning that the nanomaterial itself needs to be protected, which makes it difficult to handle. Graphene lacks a band gap, making it impossible to exploit its conductivity in electronics without some way to switch it on and off. Until such hurdles to implementation and functionality are overcome, these powerful nanomaterials will remain at arm’s length from greatness.

Scalability

Even once a nanomaterial makes the cut and performs well, the final hurdle it must pass is scalability and mass production. After all, part of the appeal of these technological advancements is their potential for widespread use. Nanomaterials like titanium dioxide, zinc oxide, and silicon dioxide have had no issues with production and subsequent use; their issues lie elsewhere.

There were long-standing questions about the scalability of graphene — particularly the costs involved — but there is now greater optimism on that front. A 2019 study predicted, in fact, that worldwide graphene sales would reach $4.8 billion by 2030, with a compound annual growth rate of 45 percent.
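
Assuming that 45 percent rate applies across the 2019 to 2030 window, the projection implies a modest 2019 baseline, which is easy to back out:

```python
sales_2030 = 4.8  # projected worldwide sales, $B
cagr = 0.45
years = 11        # 2019 -> 2030

implied_2019 = sales_2030 / (1 + cagr) ** years
print(f"~${implied_2019 * 1000:.0f}M in 2019")  # roughly $80M
```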

Borophene has likewise faced scalability issues, though in its case they center more on producing mass quantities. (It was judged a major breakthrough, for example, when Yale scientists produced a mere 100 square micrometers of the substance in 2018.) Efforts continue on that front, however, and much was learned from scaling up graphene. So it would appear to be only a matter of time.

If and when scalability is achieved, it seems safe to say that the full potential of both nanomaterials can be explored. We know they are versatile, but at that point, we will truly find out how versatile they can be. 

Data Storage in 2021: What Lies Ahead

The forecast for 2021 in data storage is continued cloudiness, with increased edginess and integration.

The cloud is everything when it comes to enterprise data storage and usage; fully 90 percent of businesses were on the cloud as of 2019, and 94 percent of workloads will be processed there in 2021. But that only begins to tell the tale. The coronavirus pandemic brought about an increased need for agility and interoperability between systems in 2020, and that promises to continue, and then some. 

With the pandemic raging on and remote work a necessity at many firms in the months ahead, we will see an emphasis on things like multi-cloud storage, all-flash storage and serverless storage, with an eye in the years ahead on edge computing. We will also see an ongoing need for integration tools like knowledge graphs and data fabrics.

The reality, as Marketwatch reported, is that Big Data serves as the spine of Big Business. As a result it must always be strengthened, so that it may meet the ever-changing needs of a community that has faced massive disruption during the healthcare crisis, and which will be forced to adapt to the ongoing data explosion.

The total amount of data created, captured or copied in the world (called the DataSphere) stood at 18 zettabytes in 2018, and is expected to reach as many as 200 zettabytes by 2025 (up from previous estimates of 175). In 2020 alone, some 59 zettabytes are expected to fall into one of these three categories, with a sizable sliver in the enterprise realm. As a result, roughly $78 billion is expected to be spent on data storage units around the globe in 2021.

The Data Analytics Report noted that artificial intelligence has always had a considerable impact on enterprise data storage, and will continue to do so. In particular, related technologies like machine learning and deep learning enable companies to integrate data among various platforms.  

In addition, all-flash storage has become an appealing option because of its high performance and increased affordability. Also coming into vogue is serverless computing, in which a vendor manages the underlying infrastructure and allocates resources on demand, leaving users free to focus on their applications.

In addition, multi-cloud offerings, which allow for data management across various on-premise and off-premise systems, are in the offing. And it is with this storage method that knowledge graphs (data interfaces) and data fabric (the architecture that facilitates data management) come into play. 

Still ahead is a pivot toward edge computing, which allows for enhanced convergence with the cloud, and the distributed cloud, where services are operated by a public provider but divided between different physical locations. 

The point is, the evolution of enterprise data storage, a challenge accelerated by a health and economic crisis, is ongoing. That is unlikely to end any time soon, given the explosion of data — and new solutions, which we can only begin to contemplate, are certain to arise in the years ahead.

Researchers Are Borrowing Inspiration from the Human Body to Filter Sea Water  

The global water crisis is fast becoming acute. Due to pollution and other environmental factors (not the least of which is global warming), some 1.1 billion people currently lack access to clean water, and another 2.7 billion face a shortage of one month or more every year.

Worse, the World Wildlife Fund estimates that 67 percent of the planet’s population could be facing a water shortage by 2025.

Clearly drastic steps are in order, and one possibility is the purification of seawater. While traditional methods of doing so are sorely inefficient, researchers have discovered a promising new method that may prove utterly revolutionary. 

The key? Mimicking the way that human bodies transport water within their cells. 

Mimicking the Functions of Aquaporins

The highly sustainable water filtration method is being researched and developed at the Cockrell School of Engineering, a branch of the University of Texas. At first, the research team was trying to mimic the functions of proteins called aquaporins, which are found in cell membranes. Aquaporins act as channels for the transfer of water within the cell and across cell membranes. The team developed a network of synthetic cell membranes that included synthetic protein structures very similar to genuine aquaporins.

The hope was to copy the way aquaporins in cell membranes transport water. Aquaporins fashion pores in the membranes of cells in organs of the body where water is needed the most. These organs include the eyes, lungs and kidneys. The team of researchers wanted to build upon this concept as a sustainable way of filtering water and removing the salt content from seawater, a process called desalination.

Better Than Expected

They were not, however, as successful at mimicking this process as planned. The individual membranes didn’t work well alone. However, when several of them were connected in a strand, they were more effective at transporting and filtering water than previously hoped. The research team has dubbed these membrane strands “water wires.” You could think of this chain of membranes as transporting water molecules as fast as electricity travels through a wire.

These membranes remove salt from water so effectively that they could be used to develop a desalination process to replace current methods, which are expensive and inefficient. This new method would make desalination 1,000 times more effective than traditional methods. Water could be purified on a large scale faster than ever to meet the demands of the world’s growing population.

Implications of Water Unsustainability

Although water is this planet’s most plentiful resource, not that much of it is fresh water that can be used for drinking and farming. Only three percent of the earth’s water is freshwater, and as noted above, this has major implications for the rising global population, which stood at 7.8 billion as of October 2020, and is projected to near 9 billion by 2035.

China (1.4 billion) and India (1.3 billion) have the most people, while the U.S. has over 330 million.

In 2015 the United Nations established clean water as one of its 17 sustainable development goals, the specific aim of which is to ensure that everyone on the planet has access to clean water by 2030. That means taking such steps as reducing pollution, increasing recycling and using water more efficiently.

Many, many other organizations are tackling this problem, as it is something that defies solution by a single body. It is critical that these organizations do so, given that water is essential to human life, essential to our very survival. 

How Blockchain is Transforming Gaming (and Vice Versa)

In 2017, the game CryptoKitties made waves throughout niche gaming communities online. Axiom Zen, the developer of the game, introduced the concept of unhackable assets: players are able to purchase, sell, and breed virtual kittens that are all but impossible to compromise.

CryptoKitties was built on Ethereum, the most popular decentralized platform for smart contracts and cryptocurrency transactions. At one point, the game was so popular that a single virtual kitten was sold for more than $100,000 and Ethereum experienced regular slowdowns.

Blockchain may just transform the gaming industry as we know it. 

For one, blockchain allows developers to create truly “closed ecosystem” gaming environments, which reduces the influence of so-called “gray trading,” thereby fostering a higher level of trust in the gaming space.

Moreover, gamers can buy in-game assets using their cryptocurrency directly, which makes the process of handling actual money more rapid and secure. In-game items can be immutably owned by users, which addresses the persistent problem of digital theft and hacking.

Even more interestingly, blockchain could blur the lines between separate digital worlds. Due to the distributed nature of blockchain, developers could enable the ability to transfer items between distinct game universes, in essence enabling a sort of digital multiverse. The potential for game world fluidity could completely shift the insular way that game studios operate.
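
A minimal sketch of that multiverse idea: a single shared ledger of item ownership that two different game worlds both consult. The names and structure here are invented for illustration.

```python
from dataclasses import dataclass, field

@dataclass
class SharedLedger:
    # item_id -> (owner, current world)
    items: dict[str, tuple[str, str]] = field(default_factory=dict)

    def mint(self, item_id: str, owner: str, world: str) -> None:
        if item_id in self.items:
            raise ValueError("item IDs are unique, like NFTs")
        self.items[item_id] = (owner, world)

    def transfer_world(self, item_id: str, new_world: str) -> None:
        owner, _ = self.items[item_id]
        self.items[item_id] = (owner, new_world)  # same owner, new universe

ledger = SharedLedger()
ledger.mint("sword#7", owner="alice", world="DungeonGame")
ledger.transfer_world("sword#7", "SpaceGame")  # the item follows the player
print(ledger.items)
```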

Though CryptoKitties is recognized as the world’s first blockchain-based game, developers have since created specialized frameworks exclusive to gaming. Many games use the dGoods protocol, an open-source token standard for creating virtual items.

In many ways, the blockchain and gaming partnership is critical, not necessarily for the future of gaming, but more so for the future of blockchain. Put simply, gaming is the first actual use case for widespread blockchain adoption.

“Gaming does not need blockchain. Blockchain needs gaming,” Josh Chapman, partner at the esports investment firm Konvoy Ventures, stated. “Blockchain will only (see widespread) adoption and application once it provides significant value to the gaming ecosystem.”

From 2017 to 2018, blockchain investments saw a 280 percent increase. By 2019, companies were already actively investing in gaming enterprises employing blockchain methods: a company dubbed Tron put $100 million into a special gaming fund for game producers interested in utilizing blockchain technology.

Widespread adoption in the gaming industry may just signal that the blockchain is here to stay, and play.

A More Environmentally Friendly Battery

Though small in size, batteries play an outsized role in today’s society, powering everything from smartphones to electric cars. And as the world’s population continues to grow, so too will the need for batteries, which will likely lead to some thorny issues. Improper disposal of lithium-ion batteries can harm humans and wildlife alike, since those batteries leach toxic chemicals and metals over time. In addition, today’s battery production process tends to be cost-ineffective.

Aluminum batteries can address both problems, as their material costs and environmental footprint are significantly smaller than those of traditional batteries.

While aluminum batteries have been around for some time, the current incarnation uses a cathode made of an organic, carbon-based compound (anthraquinone) in place of the usual graphite. With this new carbon-based cathode, electrons are absorbed as energy is consumed. This helps increase energy density, one of the key reasons aluminum batteries are a more cost-effective and environmentally friendly solution than lithium-ion batteries, which have grown in popularity with the advent of electric cars.

And while such batteries can be recycled, it is “not yet a universally well-established practice,’’ as Linda L. Gaines of Argonne National Laboratory told Chemical & Engineering News. The recycling rate of lithium-ion batteries in the U.S. (and the European Union) is about five percent, according to that same outlet, and it is projected that there will be two million metric tons of such waste per year by 2030.

That is a potentially staggering problem, and a far greater one than that which is presented by traditional alkaline batteries, which were hazardous to the environment when they contained mercury but now only contain elements that naturally occur in the environment, like zinc, cadmium, manganese and copper. 

Aluminum batteries, then, are a safe alternative to lithium-ion batteries. They are also safer in another way, as they tend to exhibit lower flammability. The element’s relative inertness and ease of handling in ambient conditions also help in this regard.

Another benefit is that by gradually replacing lithium-ion batteries with their aluminum counterparts, lithium mining — a process that results in toxic chemicals leaking into the environment, leading to large-scale contamination of air, water, and land — can be curtailed. Aluminum batteries are also a viable replacement for cobalt-based batteries, which carry with them their own environmental concerns.

Lithium mining also tends to use an enormous amount of water, roughly 500,000 gallons per ton of lithium extracted. While mining bauxite, the raw material from which aluminum is refined, also involves the use of water and energy, the resource requirements are smaller.

That said, aluminum batteries are not yet a perfect alternative to the less sustainable types of batteries being used today. Compared to lithium, they are only half as energy-dense, and scientists and researchers are looking for ways to improve the electrolyte mix and charging mechanisms.

Overall though, aluminum is a significantly more practical charge-carrier when compared to lithium, due to its multivalent property. This means that every ion can be exchanged for three electrons, thus allowing up to three times greater energy density.
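
A quick back-of-the-envelope with Faraday’s law shows where that three-electron chemistry pays off: per unit mass, aluminum actually trails lithium metal, but per unit volume its advantage is decisive. The constants below are standard physical values.

```python
F = 96485  # Faraday constant, C/mol

def capacity_mah_per_g(n_electrons: int, molar_mass: float) -> float:
    # Charge per gram, converted from coulombs to mAh (1 mAh = 3.6 C).
    return n_electrons * F / (3.6 * molar_mass)

li = capacity_mah_per_g(1, 6.94)   # ~3862 mAh/g
al = capacity_mah_per_g(3, 26.98)  # ~2980 mAh/g

# Densities (g/cm^3): lithium 0.534, aluminum 2.70.
print(f"Li: {li:.0f} mAh/g, {li * 0.534:.0f} mAh/cm^3")  # ~2062 mAh/cm^3
print(f"Al: {al:.0f} mAh/g, {al * 2.70:.0f} mAh/cm^3")   # ~8046 mAh/cm^3
```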

Can 3D Printing Help Solve the World’s Housing Crisis?

A 3D printer is being used to build a house in Italy, and while the method is not entirely new, the material being used in its construction is: locally sourced clay.

Mario Cucinella, head of his eponymous architectural firm based in Italy, designed this prototypical structure, which is being erected near Bologna and is the first 3D-printed home composed entirely of natural materials. The house is not overly large, consisting of a living room, bedroom and bathroom, but it is recyclable and biodegradable. It is also one more hint that 3D printing can help address Earth’s staggering housing problem.

As of 2019, 150 million people around the globe — i.e., two percent of the world’s population — were homeless. And 1.6 billion people, or 20 percent of the world’s population, lack adequate housing. The shortage is so acute that fully addressing it will likely involve building 100,000 new houses every day for the next 15 years, according to the United Nations.

Building 3D-printed dwellings could, at least, be part of the solution, in that they are inexpensive, sustainable and easy to erect. And Cucinella’s creation is the latest step in this housing trend, which has gathered momentum in recent years. The clay of which it is composed is extruded through a pipe and deposited in layers by a 3D printer known as a Crane WASP, which according to artists’ renderings results in a layered, conical look.

Construction began in the fall of 2019 and, before the coronavirus pandemic, was expected to be completed in early 2020.

Prior to this, concrete was the most commonly used material in 3D-printed buildings. That was true when a house was built in China in 2016, and when an office building was erected in Dubai that same year. It was true in Russia in 2017, the U.S. (specifically Texas) in 2018, Mexico in 2019 and the Czech Republic in 2020.

There were two exceptions. One was a rudimentary structure made of clay and straw in Italy in 2016; the other was a tiny cabin made of bioplastic in the Netherlands that same year. The latter was designed specifically to be used as a temporary dwelling in areas where natural disasters occur.

Certainly that’s possible with some of the other 3D dwellings as well. Those built in Russia and the U.S. were, for instance, erected in a single day. But the more common usage is expected to be as permanent dwellings for those living in areas where there is a dearth of suitable housing.

Consider the technology startup Icon and the housing nonprofit New Story, which built the aforementioned house in Texas. They are also the ones who in 2019 began construction on a village that will eventually consist of 50 such structures in Mexico, in an area of that nation where the median monthly income is $76.50. The idea is to offer these houses, which sell for $4,000 in developing countries, for $20 a month over seven years. The remaining cost will be covered by subsidies from New Story and private donations.
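
The financing arithmetic behind that model is easy to check:

```python
price = 4_000                       # cost of one house, per the article
paid_by_buyer = 20 * 12 * 7         # $20 a month for seven years = $1,680
subsidized = price - paid_by_buyer  # $2,320 from New Story and private donors
print(paid_by_buyer, subsidized)
```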

Icon and New Story are also planning villages elsewhere in Mexico, as well as in such nations as Haiti and El Salvador.

The world’s housing problem, daunting as it is, will not be solved by any single method, but 3D printing offers a partial solution.

 

The First Step in Graphene Air Filters

The First Step in Graphene Air Filters

Laser-induced graphene (LIG) air filters, developed in 2019 by a team at Rice University, have future implications for medical facilities, where patients face the constant threat posed by air-borne pathogens.

The coronavirus, which is spread by respiratory droplets, has offered sobering proof of the havoc such diseases can wreak, but it is far from the only risk these patients might face. In all, one of every 31 of them will contract an infection while hospitalized, a rate that is only expected to increase. Dr. James Tour, the chemist who headed the Rice team, noted that scientists have predicted that by 2050, 10 million people will die every year as a result of drug-resistant bacteria.

As Tour also told SciTechDaily, “The world has long needed some approach to mitigate the airborne transfer of pathogens and their related deleterious products. This LIG air filter could be an important piece in that defense.”

Tour and his team developed a self-sterilizing LIG filter that can trap pathogens and eradicate them with electrical pulses. Those pathogens (bacteria, fungi, spores, etc.) might be carried by droplets, aerosols or particulate matter, but testing showed that LIG, a porous and conductive graphene foam, was equal to the task of halting them. The electrical pulses then heated the LIG to 662 degrees Fahrenheit (350 degrees Celsius) for an instant, destroying the pathogens.

The self-sterilizing feature could result in filters that, in the estimation of Tour's team, last longer than those currently used in HVAC systems within hospitals and other medical facilities. They might also be used in commercial aircraft, he told Medical News Today.

Israel’s Ben-Gurion University of the Negev has taken LIG technology a step further and applied it to surgical masks. Because graphene is resistant to bacteria and viruses, such masks would provide the wearer with “a higher level of protection,” as the mask’s inventor, Dr. Chris Arnusch, told Medical Xpress.

Because of its antibacterial properties, graphene can be used in medical and everyday wearables. Manufacturers have also taken advantage of its strength, flexibility and conductivity to integrate it in a wide range of products, from touch screens to watches to light jackets.

As for LIG itself, it is produced by a process discovered in Tour's lab in 2014 that involves heating the surface of a polyimide sheet with a laser cutter to form thin carbon sheets. LIG has many other uses as well, whether in electronics, as a means of water filtration or in composites for building materials, automotive components, body armor, sports equipment and aerospace components.

Arnusch, who serves as senior lecturer and researcher at the BGU Zuckerberg Institute for Water Research (a branch of the Jacob Blaustein Institutes for Desert Research), cited water filtration as the inspiration for his work on surgical masks.

But because COVID-19 has offered such a grim reminder about the dangers of airborne illness, the air filters have garnered as much attention as any graphene-related application at present. It is compelling evidence of just how versatile (and valuable) graphene can be, and of what it might mean for our safety, and our future.

How Do We Finally Clean Up Our Polluted Oceans? This Company May Have an Answer

How Do We Finally Clean Up Our Polluted Oceans? This Company May Have an Answer

The Great Pacific Garbage Patch (GPGP) — alternatively known as the Pacific trash vortex — is an area in the Pacific Ocean teeming with plastic pollution. It’s one of the largest accumulations of plastic among the world’s oceans, and studies indicate it continues to exponentially increase in size. Sadly, it’s only one of countless trash dumps throughout the world’s oceans, rivers and waterways.

Enter The Ocean Cleanup, a Netherlands-based nonprofit environmental protection organization determined to remove this debris. The effort combines innovative trash-removal technology with a public awareness campaign to reduce waste. Here's a look at this ambitious project to clean and protect the earth's water supply.

What Is The Ocean Cleanup?

The Ocean Cleanup aims to remove plastic pollution in both oceans and rivers on a massive scale. A primary goal beyond the original purpose of cleaning oceans is to clean the 1,000 most polluting rivers, which account for an estimated 80 percent of plastic pollution in oceans, according to the Maritime Executive.

CEO Boyan Slat, who founded The Ocean Cleanup in 2013, believes the organization also needs to create solutions that prevent plastic from entering water systems in the first place. The Ocean Cleanup announced a refined model of its floating Interceptor device in October 2019 to address both cleanup and prevention. So far the organization has installed four Interceptors on rivers in Indonesia (Jakarta), Malaysia (Klang), Vietnam (the Mekong Delta) and the Dominican Republic (Santo Domingo). Additionally, Thailand and Los Angeles County are exploring similar possibilities.

The Ocean Cleanup is funded mainly by donations and sponsors such as Salesforce CEO Marc Benioff and PayPal co-founder Peter Thiel. A 2014 crowdfunding campaign generated over $2 million, and by November 2019 the organization had raised over $35 million. Its first ocean cleanup system was deployed in September 2018, and a more refined version, System 001/B, deployed a year later, proved successful at collecting debris.

Interceptor Technology

The Interceptor is a solar-powered vessel designed to remove up to 50,000 kg of trash per day, equipped with lithium-ion batteries that store energy so it can run around the clock. It is an eco-friendly machine that creates no noise or exhaust and is not harmful to marine life. Anchored in a river, the device (which does not interfere with other vessels) captures floating debris. The system connects to a computer that monitors data on collection, energy performance and the health of electronic components.

The key to the Interceptor's trash collection is a set of conveyor belts that scoop up debris and deposit it in onboard dumpsters; the waste is then transferred to a local waste management facility. Deploying a fleet of these automated trash collectors could remove an enormous amount of plastic in a short time, since one Interceptor is capable of removing up to 110,000 pounds (roughly 50,000 kilograms) of plastic per day.

Slat hopes to cut plastic trash in the world's oceans by 90 percent by 2040. The solution will involve mass-producing the Interceptor for use in different parts of the world and scaling up the project with larger fleets of up to 60 devices.

A crucial area targeted for ocean cleanup is the North Pacific Subtropical Gyre between Hawaii and the continental United States. This area is about the size of Alaska and contains an estimated 79,000 tons of plastic pollution, including tiny fragments smaller than 5 mm in length. Thankfully, after years of trial and error, a more streamlined system was developed in 2019 that can retain both plastics and microplastics. With that advancement, Slat is confident his vision for mass cleanup is attainable.

How blockchain can help secure 3D printing

How blockchain can help secure 3D printing

3D printing is an advanced technology that many people in and outside of the tech world are already highly familiar with. In a nutshell, 3D printing is the process of using a 3D printer to create a computer-designed three-dimensional object. This is done by adding layer upon layer of material, a process often referred to as additive manufacturing. As far as blockchain is concerned, it is an equally advanced yet much different type of technology: a system of records and transactions secured by advanced cryptography. It is thought that regulating 3D printing with blockchain would make the technology far safer in the hands of the general public.
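
To illustrate the record-keeping idea in the abstract, here is a minimal hash-chained ledger of print jobs in Python. It is a sketch only, not any specific product: a real blockchain adds a peer-to-peer network and consensus, and the field names here are hypothetical.

    import hashlib, json, time

    # Each record commits to the one before it, so tampering with any past
    # print job breaks every later hash, making alterations evident.
    ledger = []

    def record_print_job(printer_id, design_hash, material):
        prev = ledger[-1]["hash"] if ledger else "0" * 64
        entry = {"printer": printer_id, "design": design_hash,
                 "material": material, "time": time.time(), "prev": prev}
        entry["hash"] = hashlib.sha256(
            json.dumps(entry, sort_keys=True).encode()).hexdigest()
        ledger.append(entry)
        return entry

    record_print_job("garage-01", "sha256-of-approved-design", "PLA")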

3D Printing’s Inherent Benefits and Risks

In the coming decades, 3D printing technology is poised to revolutionize life as we know it. Anyone will be able to turn their garage into a micro-manufacturing facility, and the healthcare industry will be able to produce replacement organs made out of real human tissue. Essentially, anyone will be able to create almost anything at any time.

With the massive freedom that 3D printing will provide humanity, new risks will present themselves, among them the use of 3D printers to easily produce powerful weapons, bombs, counterfeit items and other malicious objects. Governments and technology experts around the world are already looking for ways to preempt these risks, and using blockchain as a regulating system is seen as a leading candidate.

Ways That Blockchain Can Help Secure 3D Printing

No Guns – 3D printers around the world have already been used to create homemade guns, and the production of projectile weapons is a major risk posed by the technology. Blockchain-regulated 3D printing devices could be hardcoded in a manner that would make it extremely difficult to produce any type of weapon, and the blockchain could immediately alert the authorities if someone did attempt to create an unauthorized weapon with the device.

Intellectual Property Protection – 3D printing is going to make it easier than ever before to infringe upon intellectual property rights. For example, a branded product that took millions of dollars and years to develop could be easily mass-produced in someone’s garage for the cost of raw materials using a 3D printing device. It is thought that blockchain will be able to prohibit 3D printer users from infringing upon the intellectual property rights of others.

Taxation Enforcement – 3D printing technology will make it easier than ever before to produce black market goods on an industrial scale. This would prevent taxation bodies from collecting taxes owed on any underground commercial endeavors. It is thought that blockchain will be able to keep 100 percent accurate records on goods produced in order to alert government officials to underground commercial activities.

Secure 3D Bioprinting – It is not a question of if, but when, 3D bioprinting will be used en masse to produce replacement organs and body parts for human beings. Once this technology is used on a wide scale it will need to be secured by an ironclad system, and with blockchain's near-impenetrable cryptography-based core, it is likely to be the go-to security layer for future bioprinting technology.

Preventing Counterfeits – Advanced 3D printing technology of the future will be able to produce counterfeit IDs, counterfeit money, counterfeit credit cards and other counterfeit objects with ease. By using blockchain regulators on the 3D printing device and upstream at the internet provider level, it would be very difficult for home-based users to create counterfeit goods without local police being swiftly alerted.

From CO2 to Coal: Turning Back the Clock

From CO2 to Coal: Turning Back the Clock

With climate change at the forefront of most environmental policies and initiatives, scientists and researchers are scrambling to figure out a way to address the problem. Let’s face it, the earth is getting warmer at an alarming rate. The current warming trajectory means that parts of the world will be faced with constant weather-borne catastrophes in the coming decades. Over the much longer term, global warming could cause countless animal extinctions and even threaten the very existence of mankind itself. Thankfully, several leading solutions are being developed that could rewind the CO2 emissions clock. While scrubbing carbon dioxide from the air might seem impossible, it is very close to becoming a reality.

Carbon Capture and Storage

Some of the world's leading scientists have developed an anti-emissions technology known as carbon capture and storage. In a nutshell, the technology draws carbon dioxide from the earth's atmosphere and turns it into a raw, coal-like solid carbon. Once created, this solid carbon can be safely stored in large underground facilities that present no contamination risk to the greater environment. Thus far the technology has only been used on a small scale; however, the processes driving it are thought to be fully scalable to an industrial level.

How Does Carbon Capture and Storage Work?

An international group of researchers and scientists operating out of RMIT University in Australia has created a fully functional electrocatalyst made of liquid metal. This electrocatalyst can pull carbon dioxide out of the air and convert it into solid carbon through room-temperature reactions, after which the solid carbon can be collected and moved to underground storage. Scaled out, this technology has the potential to rewind the earth's carbon clock.
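
The accounting behind "rewinding the carbon clock" follows from simple stoichiometry, sketched below: carbon contributes only 12 of CO2's 44 grams per mole, so each tonne of solid carbon banked underground represents roughly 3.7 tonnes of CO2 pulled from the air.

    # Molar masses: C = 12 g/mol, CO2 = 44 g/mol, so each tonne of solid
    # carbon locked away corresponds to about 44/12 = 3.67 tonnes of CO2.
    M_C, M_CO2 = 12.0, 44.0
    tonnes_carbon_stored = 1.0
    print(f"{tonnes_carbon_stored * M_CO2 / M_C:.2f} t CO2 removed per t carbon")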

How Scalable is It?

The carbon capture and storage technology was devised from the get-go with scalability in mind, and the team of scientists behind it claims it can scale to an industrial level. In fact, unless the technology is deployed at scale globally, it would only be able to remove a marginal amount of carbon dioxide from the atmosphere. Placing a large-scale carbon capture and storage center in every major city on earth could help reduce carbon emissions by a double-digit percentage globally.

Could Carbon Capture and Storage Cure Global Warming?

Experts who have closely studied carbon capture and storage believe it could be a key component of any cure for global warming. The technology alone is not a cure, but used in coordination with other initiatives it could help dramatically reduce carbon dioxide in the atmosphere. The reduction could realistically reach a double-digit percentage if processing centers were widespread, built to industrial scale and strategically located where carbon emissions are highest.

Does The Technology Pose Any Risks?

The carbon capture and storage process itself is clean and pollution-free, so it poses no primary risks. It does, however, present a serious secondary risk: because the technology can mass-produce raw coal usable as a fossil fuel, there would always be a temptation to burn that coal for dirty, non-green energy. Laws and regulations, however, could be put in place to prohibit coal produced by carbon capture and storage from being used to generate further greenhouse emissions.

Engineering “Super Coral”

Engineering “Super Coral”

In Townsville, Australia, at the National Sea Simulator, researchers gather to watch the first bits of egg and sperm float away from the corals they have been observing. Madeleine van Oppen, a coral geneticist, readies her team for the spawning, as one species of coral is spawning more quickly than anticipated. She and her team must move fast: it is imperative to prevent it from crossbreeding with the other coral in the tank. Van Oppen and her team are attempting to create new breeds of coral that can withstand the intense marine heating that has killed over half of Australia's Great Barrier Reef. Rising global temperatures are destroying reefs in every ocean. Australia has committed $300 million to research aimed at preserving and restoring coral reefs, and in doing so has become a beacon for scientists devoted to reefs, primarily at the Sea Simulator.

The Australian Institute of Marine Science created the Simulator, in which dozens of tanks replicate the conditions of today's oceans as well as projected future conditions. It is here that van Oppen and her team are attempting to re-engineer corals by any method that may be fruitful, from old approaches akin to the selective breeding used to domesticate plants and animals to new technology like gene-editing tools. Following the example of tech entrepreneurs chasing fast results, the team quickly tests new ideas and discards those that hold no promise; whole projects can be kept or abandoned over a span of ten hours. The material essential to this work, the corals' genetic material, is released only once a year. The scientists must move quickly to gather and test it, or the eggs will die without being fertilized and the team will have to wait another year for a further attempt.

Van Oppen and other scientists know they are working against a crucial and unforgiving clock. Over the past decade, underwater heatwaves have decimated coral reefs in large numbers. By the time temperatures increase by 2 degrees Celsius, reefs will be gone from waters worldwide, and warming is estimated to reach 3 degrees by the end of the century. Another threat facing reefs is the acidification of the oceans, as absorbed carbon dioxide lowers the water's pH; the calcium carbonate skeletons of corals and the shells of other marine life are vulnerable to these newly corrosive levels. Seven years ago, van Oppen and Ruth Gates, a conservationist and coral biologist, began to wonder whether there could be a way to give coral some extra advantage to help it survive.

Coral conservation had thus far focused on pollution, predators, fishers and tourists, not on something as radical as the duo had in mind. They advanced the idea of assisted evolution in a 2015 paper in the Proceedings of the National Academy of Sciences, and in the same year Paul Allen's charitable foundation gave them funding for research. The research currently focuses on four main areas: cross-breeding strains in the hope of creating heat-resistant variants, using genetic engineering to alter coral and algae, rapidly evolving tougher strains by growing them quickly, and manipulating the corals' microbiomes.

Introducing genetically engineered lifeforms into existing ecosystems has raised many concerns in the scientific community. David Wachenfeld, chief scientist of the Great Barrier Reef Marine Park Authority, compares the situation to the cane toad incident of 1935, when the toads were introduced to Australia to combat sugarcane-devouring beetles but showed no interest in the insects and instead wreaked havoc by poisoning the surrounding wildlife. He fears that engineered coral could likewise harm existing reefs. In March, van Oppen and her team received permission to move the crossbred hybrids into the open ocean. It may be some time before any actual effects can be seen, but it may be the only option the coral reefs have left for a chance to survive.

Blockchain Could Change The Way We Use Energy

Blockchain Could Change The Way We Use Energy

Blockchain is widely regarded as the next major technological breakthrough for mankind. In the not-so-distant future, it is thought that blockchain will run everything from internet-based transactions to more complex government-run systems and core infrastructure. For anyone unfamiliar with blockchain, it is a completely transparent, highly encrypted and decentralized means of storing information and digital records. With fewer security risks than more standardized digital systems, it is slowly becoming a preferred method of managing energy distribution and consumption: for energy management, blockchain promises to be safer, more transparent for consumers and more efficient in how it maintains infrastructure.

Setting Up Smart Grids

Numerous start-up ventures have recently made consumer-driven smart grids possible. In a nutshell, these grids grant consumers a far more transparent level of access and control over where they source their energy. This, in turn, creates more demand for clean and green energy sources, spurring further innovation in these lucrative sectors. For example, consumers can use a blockchain to verify that their sole source of energy is wind, so that all of the money they spend on energy flows to upstream wind farms.

Less Energy Market Manipulation

It has recently come to light that many traditional fossil fuel-driven energy companies have used their market share for decades to suppress cleaner energy products. The coal industry, for example, has gone out of its way to make it much harder for consumers to get access to green energy providers. With blockchain forcing transparency onto the energy game, the market will be scrutinized by end-users far more than has ever been the case, and fossil fuel companies that abuse their market power will face massive backlashes under a blockchain-driven system.

Less Energy Waste 

One of the primary benefits of using blockchain to manage energy infrastructure and consumption is efficiency. By some estimates, routing energy through conventional means can be nearly 30 percent more wasteful than blockchain-managed routing, largely because standard systems often rely on age-old routing channels rather than more modern infrastructure. With greater control over how energy reaches the end-user and greater transparency over energy routes, energy providers will be pushed to clean up wasteful delivery systems.

Improved Energy Data Management

Another major incentive to use blockchain in the energy sector is improved data management. Thanks to blockchain's core transparency, consumers would have greater access to current market energy prices, marginal costs, energy taxes and compliance information. Non-blockchain energy conglomerates have been accused of heavily manipulating the data passed on to end-consumers, and of intentionally omitting certain data sets to keep consumers from knowing too much. With blockchain-driven energy systems, the industry's age-old practice of hiding data would simply no longer be possible.

Solar Will Be Moving Fast

In parts of the world where blockchain-driven energy systems are already in place, solar power is the leading energy source that open-grid end-users are consuming. Once blockchain energy systems take effect in Western energy markets, a similar pattern is predicted. Solar energy is the clear front-runner in the clean energy race, and once consumers are able to hand-select their energy sources through blockchain management, it is expected to work its way toward becoming not just the leading green energy source, but a leading energy source overall.

Which nanomaterials show promise for water filtration?

Which nanomaterials show promise for water filtration?

Water’s amazing potential unites every living thing. Safe, healthy water can bring health crises under control and make the literal difference between life and death. There are deeply serious challenges to providing safe water. Much of the Middle East, Africa and Asia have almost no infrastructure outside the bigger cities. Paved roads, electricity and running water are uncommon. How can modern engineering provide efficient, economical ways to deliver safe drinking water?

Worldwide, one in nine people, over 844 million in all, lack access to safe water. Over 30 percent of the world's population lacks adequate sanitation, and a child dies of a water-borne illness every two minutes. Clearly, these problems need to be addressed.

Water filtration at the molecular level can reduce mortality rates for both infants and mothers while improving health, sanitation and medical care for everyone. In the past, substances like bleach, iodine and chlorine were mixed into water supplies to alter the water chemically or simply to poison unwanted organisms; when done incorrectly, it is the people who end up getting poisoned.

Filtering river and spring water at a high enough rate to service laundry, crops and village tanks has been difficult, and chemicals are simply diluted too quickly to be useful in these situations. Filtration systems that can pass a high rate of flow while economically ensuring a high-quality result are desperately needed.

There are two ways filtration can improve water quality. One is filtering out particulates and organisms by forcing the water through holes too small for foreign objects to pass. The problem is that when the holes are small enough to stop microscopic organisms like bacteria, the flow can become slow and impractical.

The other way filtration can help is by treating water ionically, by way of metals and other chemicals bonded to the filter material. These react with contaminants at the molecular level, sanitizing the water and pulling harmful chemicals out of it.
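
To put the size-exclusion approach in perspective, the sketch below compares rough, order-of-magnitude contaminant sizes (textbook figures, not measurements of any particular filter) against a few hypothetical pore sizes:

    # Typical sizes in nanometers (rough, order-of-magnitude values).
    contaminants_nm = {"sand/silt": 10_000, "bacteria": 1_000,
                       "viruses": 100, "dissolved salt ions": 0.5}

    def blocked_by(pore_nm):
        return [name for name, size in contaminants_nm.items() if size > pore_nm]

    for pore in (5_000, 200, 50, 1):
        print(f"{pore} nm pores block: {blocked_by(pore)}")

The tighter the pores, the more they block and the slower the flow, which is precisely the tradeoff nanomaterial membranes aim to ease.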

Nanomaterials are a branch of nanotechnology, which manipulates particles, materials and chemical reactions at scales from one to 100 nanometers. Some nanomaterials are sheets only one atom thick. Nanomaterials have been found to possess unique surface, physical and chemical properties that could have a far-reaching impact on safe water supplies.

Advantages of filtering with nanomaterials instead of conventional filters include the lower pressure needed to move water through the filter and the huge surface area created by the tiny particles involved. Combining nanomaterials in one filtration system, or using different materials at each stage of water treatment, can produce remarkable results.

Membranes constructed of carbon nanotubes remove nearly every kind of water contaminant, including viruses, bacteria and organic contamination, yet they can pass water at a far faster rate than conventional filters. Charging such membranes with silver nanoparticles takes advantage of silver's antiseptic properties, yielding high-flow filters that are highly effective not only at particle removal but also at sterilization.

Nanomaterial usage varies widely by application. Graphene, carbon nanotubes and borophene are being used for membranes; antiseptic nanoparticles like silver can be combined with the membrane; and free metal nanoparticles such as iron and zinc can be added to the water like a powder and then removed with impurities attached to them.

Since the first research in the 1980s, nanomaterials have come a long way. Today they are poised to solve some of the world's most serious health problems while simultaneously creating entire new industries. Nothing makes more sense than applying such a versatile technology to the enormous global challenge of safe water supplies.

Creating a Fully Sustainable Plastic – Is It Possible?

Creating a Fully Sustainable Plastic – Is It Possible?

As recent demonstrations and political statements across the globe show, the environmental fate of our planet is at stake. One of the main contributors to these problems worldwide is plastic: the material does not break down over time and is accumulating at a record pace in landfills and oceans. Innovations in manufacturing and changes in consumer usage mean that fully sustainable plastics are being developed and may soon be part of our everyday lives.

Sustainable manufacturing innovation is driving a rise in climate-friendly plastics as awareness of global warming, marine trash and the overall harmful nature of mass-produced plastics has grown. Bioplastics, biodegradable plastics and the recycling of existing plastics can lower greenhouse gas emissions, and hydrogen produced by harnessing energy from air and water can be used to manufacture plastics in a different way.

Bioplastics are made from materials that break down, such as sugarcane and corn starch. Biodegradable plastics serve the same purposes as regular plastics but break down quickly after use. One new process uses marine microorganisms to make products that recycle completely into organic waste, and other research is focused on products such as home-compostable packaging and edible plastic wrap.

One of the best ways to create sustainable plastics is to use recycled plastic rather than "virgin" plastic to make products. With the world's plastic waste estimated to exceed 12 billion tons by 2050, this industry taps into the huge expanse of plastic already in landfills and the ocean. Since only 14% of plastic is currently recycled, this is a huge untapped resource; reused plastics already show up in furniture, fashion and ceramics.

Researchers have also found ways to break down and reuse plastics that were formerly thought to be too difficult or expensive to deal with. Products such as styrofoam cups, plastic shopping bags and drink pouches are being transformed into reusable materials. Another way to deal with these “unrecyclable” plastics is to continue to push companies to create design-for-disposal, meaning plastic products that are purposely made to be recycle-friendly.

The corporate commitment to sustainable plastics also ensures that the movement will continue to gain momentum. A number of large companies, including Pepsi, Unilever and Walmart, have said their packaging will be 100% reusable, recyclable or compostable by 2025 or earlier. Consumers have a huge role to play as well: we can stop using plastic shopping bags and single-use plastic straws, and continue to buy from and promote companies that innovate to keep their work from harming the planet.

It’s unlikely we can ever have a plastic-free future. There are legitimate and necessary uses for these materials that contribute to our overall health and wellbeing. However, if we are sometimes willing to pay a little more, prepare ahead for shopping trips and suffer some small inconveniences, we can make a huge impact on plastics manufacturing, and ultimately, our planet’s future.

Can we cure death? Should we?

Can we cure death? Should we?

In the Netflix series Altered Carbon, people live forever thanks to a device embedded in their neck, which allows them to transfer their consciousness from one body to the next, an idea that has already been discussed in the current conversation regarding the cure for death. While it’s unlikely that any future technology will take this form (barring a major breakthrough), there is a lot of momentum in the scientific community around overcoming death.

The world's foremost futurists seem fascinated by the prospect of extending human life. Those seeking a cure for death include Larry Ellison, co-founder of software corporation Oracle, who donates hundreds of thousands of dollars every year to life-extension therapies, and Peter Thiel, co-founder of PayPal, who has donated millions of dollars to anti-aging research. In addition, Google parent company Alphabet operates a secretive, billion-dollar effort to cure aging through the biotech company Calico, which Google Ventures CEO Bill Maris helped launch.

One of the best-known death cures of today is cryonics: freezing recently deceased individuals in a vat of liquid nitrogen to preserve their bodies, in the hope that if a cure for death is found at some point in the future, they can be thawed and properly revived. There are over 300 people currently cryopreserved across the world, while 2,000 more have signed up for the process, awaiting the moment they pass.

While cryonics deals with preserving and eventually reviving deceased individuals, most futurists and technologists discuss the cure for death with an important distinction. One of the primary drivers of aging, which causes death, is the accumulation of senescent cells, a degenerative cell state that spreads to nearby cells; this is what causes the functional decay and eventual failure of organs as we age. Thus, most efforts in this field amount to attempts to find a cure for aging.

Even for the healthiest individuals, there’s not much that can be done to mitigate the effects of senescence, which is why most modern efforts to cure death focus on preventing the natural decay of our bodies. After all, what good would immortality be if our bodies simply continued to deteriorate?

But even if the cure for death might be just around the corner, it’s going to be a long time before the average person will be able to experience this life-changing advancement. For example, being cryogenically frozen can cost anywhere from $28,000 to $200,000. And of course, there are the potential issues that could arise from curing death. Would the world become overpopulated? Would life have any meaning if we did not have death?

The implications of curing death remain dubious, but the existence of the notion is in itself a testament to humanity's boundless ingenuity and tenacity for living. Besides, think of how much a person could accomplish if they managed to stretch their lifespan to hundreds of years. But before we pass judgment on such a feat, we must first discover whether we are capable of finding a cure for death at all.

To burn or not to burn: carbon capture versus green energy

To burn or not to burn: carbon capture versus green energy

Renewable energy is at the forefront of the public consciousness, and its implementation has been noticeably increasing in recent years. But renewable energies aren’t the only way to mitigate carbon release into the atmosphere: carbon capture and storage (CCS) has been proposed as an alternative method for decreasing emissions, particularly for the power and industrial sectors. What benefits does carbon capture bring? How does it hold up compared to green energy sources?

CCS differs from green energy in that it aims to capture excess carbon dioxide emissions, typically from areas such as biomass or fossil fuel power plants. The collected gas is then transported and securely stored, typically somewhere underground where it won’t enter the atmosphere. The bulk of existing carbon capture is focused on the power and industrial sectors, and CCS systems could reduce emissions from conventional industrial facilities and power plants by approximately 90 percent. It is also predicted to contribute 7 percent of total emission reductions by 2040.

One of the standout benefits of CCS is that the collected and stored CO2 can then be reused elsewhere. That carbon dioxide can be utilized in a wide range of fields, such as enhanced oil recovery, beverage carbonation, food processing and packaging, pharmaceutical processes, horticulture, and many more. Carbon capture’s ability to make use of the gas that would otherwise have negative effects sets it apart from clean energy.

However, there are those who claim that renewables are the only way to go. One of the primary arguments against CCS is the potential for leakage over time: given the massive amounts of emissions that would be stored, even low leakage rates can undermine overall emission reductions. There are also claims that it is ineffective compared to renewable energies, which as a whole have the potential to reduce total carbon emissions by more than 70 percent by 2050.
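
To see why leakage matters so much, consider a toy model in which a constant fraction of the stored CO2 escapes each year (the rates below are illustrative, not measurements from any real storage site):

    # Fraction of stored CO2 retained after a century at a constant
    # annual leakage rate: retained = (1 - rate) ** years.
    def fraction_retained(annual_leak_rate, years=100):
        return (1 - annual_leak_rate) ** years

    for rate in (0.001, 0.01):
        print(f"{rate:.1%}/yr leakage -> {fraction_retained(rate):.0%} retained")

Even a 1 percent annual leak leaves only about a third of the CO2 sequestered after a hundred years, which is why storage integrity dominates the debate.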

Green energy, which consists of electricity generated from hydro, wind, solar and geothermal power, has been the predominant method for reducing carbon emissions. These sources are efficient and extremely cheap to maintain. But green energy implementation is not perfect, and there are cases where renewable energy's shortcomings come to light.

Renewable sources typically have higher upfront costs, which can be a major deterrent for those looking to implement clean energy for their businesses or residences. Storage options with large capacities are still being developed, and until they arrive, clean energy may be out of reach for large-scale endeavors. Geography also matters: some forms of green energy are more suitable than others in particular terrains, and in some places implementing green energy could prove difficult. These areas might be ripe for carbon capture to pick up the slack.

In the end, neither carbon capture nor green energy stands alone as the single solution to our massive greenhouse gas problem. If anything, we could use all the help we can get. If we are to meet the demands for clean energy and a healthier environment, using both carbon capture and green energy to mitigate emissions will greatly aid the fight to save our planet.

The Weaknesses of 3D Printing are also its Strengths

The Weaknesses of 3D Printing are also its Strengths

Imagine entering an operating room for major surgery and knowing that your team of physicians has already practiced the procedure on a replica of your body. Before you go under anesthesia, you can feel confident that your surgery will be shorter, your recovery time simpler, your risks of complications fewer, and your bill cheaper. With 3D printing entering the medical and healthcare field, this is more possible than ever before.

In the past several years, the idea of 3D printing has entered the mainstream as an exciting and futuristic revolution in manufacturing. Unlike traditional printers with ink and paper, 3D printers build up and shape materials such as aluminum, wood, foams, steel, glass and plastic to create the objects we use daily. Pressing "print" on a new pair of eyeglasses, a car or furniture sounds like a dream come true, and with the many types of 3D printers available, it may soon be a reality for the average person.

Even more impressive is the way 3D printing can advance modern medicine. Prosthetic limbs from 3D printers can change amputees' lives, and printing replicas of patients' actual body parts allows doctors to familiarize themselves with a patient's anatomy before entering the operating room. 3D printing's potential is great, but the technology is not without challenges, and there are many issues to solve before it becomes mainstream in medical settings.

Due to detailed design requirements, 3D printing is time-consuming and expensive. A decade ago, when 3D printing was beginning to emerge, a printer could cost hundreds of thousands of dollars. Even with years of advancements since then, 3D printing is still slow enough to prohibit its use en masse, and while it has gotten cheaper, it remains a fairly expensive investment. As a result, 3D printers are typically used only for very complex cases or in organizations with large budgets.

The appeal of 3D printing in medicine is its ability to cater to specific individuals rather than just the average person, but this makes the process longer and more complicated. The red tape, cost and technical requirements can make getting 3D models into a physician's hands a long process, raising ethical concerns about healthcare access, safety and capacity as 3D printing stands today in the medical and healthcare field.

Despite these drawbacks, optimism about 3D printing, particularly in medicine, should remain strong, because its weaknesses are also its strengths. 3D printing requires exact design specifications to make prostheses and replacement organs, and its slow, laborious process lends itself well to the complexity of creating body parts. The capability to produce replicas of a patient's exact anatomy reduces the risk of mistakes during procedures, and even surgical training tools can be tailored to specific procedures when created, albeit painstakingly, with a 3D printer.

Organizations like Northwell Health are leading the charge for 3D printing body parts. Allowing surgeons to use these for practice could mean a safer and more efficient surgery for both the hospital and the patient. 3D printing can viably help treat tumors, deformities, amputations, and many more conditions, proving that the use of 3D-printed objects can dramatically improve a patient’s healthcare experience from start to finish.

As 3D printing advances, it is sure to become more efficient and cost-effective. The more we invest in it now, the sooner modern medicine can reap the rewards.

Will We Ever Be Able To Reuse All The Plastic We Produce?

Will We Ever Be Able To Reuse All The Plastic We Produce?

Plastic has become an integral part of daily life as the main component of devices, objects and gadgets that humans use every day all over the planet. It was discovered in 1907 by scientist Leo Hendrik Baekeland when he combined formaldehyde with phenol as an experiment and heated the mixture, producing a new substance: the first fully synthetic plastic. He couldn't have known then how this new material would change the world, for better and for worse.

Though plastic has been a positive contribution to engineering, it has an infamous dark side. The material decomposes extremely slowly and doesn't mix naturally with other waste. According to Great Britain's Royal Statistical Society, about 91% of plastic waste never gets recycled: only about 12% is incinerated, leaving roughly 79% to end up in landfills or as litter.

The UN reports that the world produces about 300 million tons of plastic per year, almost the weight of the world's entire human population, and most of it is never recycled. About 25% of the plastic we use ends up in our rivers, lakes and oceans, where marine life consumes it, harming both the animals and the humans who eat seafood. Studies predict that by 2050, there will be more trash in the oceans than fish.

With these less-than-uplifting facts in mind, how can we combat the problem and protect our planet for the future? Entrepreneurs are taking matters into their own hands by creating products that help reduce plastic pollution and make recycling easier. TerraCycle, based in Trenton, New Jersey, has created shampoo bottles made from plastic collected from waterways and beaches. Precious Plastic is building machines that convert plastic trash into new usable materials. The Amsterdam-based Perpetual Plastic Project repurposes plastic waste into new objects via 3D printing. And The Ocean Cleanup, also based in the Netherlands, is working on an innovative way of extracting massive amounts of plastic trash from the oceans.

Entrepreneurship is a helpful addition to solving the plastic problem; what's also necessary is individual responsibility and government legislation. Individuals can help by recycling and reducing their reliance on plastic products. Many U.S. states are already implementing laws that reduce the use of plastic straws and utensils in restaurants, ban plastic bags in grocery stores, encourage reusable water bottles and help curb waste in general. Many businesses and large corporations are banning or curbing single-use plastic items; Carlsberg, for one, has banned plastic can rings.

With the depressing statistics on plastic, the best way to turn it around is to take action individually and collectively as a society. As individuals, we can do our part in helping to reduce our reliance on plastic for everyday life. Advocating for change on local, state, national, and international levels will help our planet for future generations.

Will Ocean Cleanup Work?

Will Ocean Cleanup Work?

We’ve all seen images of the veritable plastic islands floating in our oceans and on our shores. By 2050, Earth’s oceans could well have more plastic than fish. But an ambitious ocean cleanup project aims to change all that. The question everyone is asking is: will it work?

The Ocean Cleanup, a non-profit founded by Boyan Slat in 2013, has a singular focus: to rid the world’s oceans of plastic and waste. For the past five years, the organization has been researching and developing prototypes for ocean cleanup, and in 2017, they landed on the design that would become System 001, a boom-like mechanism created to efficiently remove large amounts of plastic from our oceans. Their first mission is to clean up the Great Pacific Garbage Patch, an oceanic gyre containing an estimated 80,000 metric tons of plastic.

The boom is made up of 2,000 feet of plastic piping with a 9-foot skirt hanging below. It utilizes natural oceanic forces for movement, combining energy from the waves, wind and current. By drifting into a U-shape, System 001 corrals plastic near the surface of the water: the rig moves faster than the plastic, which is affected only by the current, allowing for effective capture. The collected plastic is picked up by a vessel and taken to be processed and recycled.

The system, nicknamed Wilson, is made to avoid harming sea life in the process. Because the skirt below is solid, the current naturally pushes fish underneath and past the machine, and it moves slowly enough for creatures to swim away rather than be trapped against the screen. The solid screen also means sea life won't get tangled as it might in a net. Wilson avoids other ships too: it is equipped with lanterns, radar reflectors, navigational signals, GPS and anti-collision beacons to ensure that the system won't have any issue dealing with other vessels at sea.

Engineered to withstand various ocean forces, the key to Wilson’s resilience is flexibility. The boom is constructed to be limber and follow the waves, wind, and current. The free-floating nature of the mechanism gives it a surprising amount of survivability when facing difficult conditions.

After many trials and tests, The Ocean Cleanup received the go-ahead to send its rig to the Great Pacific Garbage Patch in early October 2018. The system arrived and was installed at the patch on October 16, and cleanup is currently in process.

If the initial voyage is a success, the organization plans to launch a whole fleet of cleanup booms. The Ocean Cleanup projects that it will be able to clean 50% of the Great Pacific Garbage Patch in five years, and that by 2040, they will have cleaned 90% of the plastic and waste in the ocean.

The non-profit has faced a number of criticisms regarding the viability and effectiveness of the project, but Slat and The Ocean Cleanup have faced these comments head-on, offering levelheaded explanations or solutions in response to naysayers. Despite doubts, a large-scale cleanup project like this is unprecedented, and the organization’s first venture looks set to succeed. Time will tell just how much of an impact The Ocean Cleanup and System 001 will have on our oceans.

Pondering the Many Possibilities of Graphene/Nano

Pondering the Many Possibilities of Graphene/Nano

Graphene almost sounds like the stuff of science fiction, or at least a dead-of-night infomercial — too good to be true, too amazing to be believed.

It has been called a “supermaterial,” in that it is composed of a carbon layer just one atom thick, making it the world’s only two-dimensional material. Yet it is quite strong. It is also flexible yet transparent, conductive yet impermeable (except for water).

As a result, there are all manner of potential uses for graphene (as well as nanotubes, a rolled cylindrical version of the substance). They include such areas as energy creation, information technology, and sustainability.

And here's the thing: research into graphene is still in its infancy. While there is evidence it was used as far back as the Neolithic Era (and while it was studied in the 1940s), it wasn't really discovered, per se, until physicists Andre Geim and Konstantin Novoselov did so in 2004.

What this means is, there are likely possibilities for graphene that have yet to be considered, things that go well beyond the research that has been conducted since Geim made his initial discovery.

Geim, born in Russia to German parents, was conducting experiments at the University of Manchester in the early 2000s. One involved the possibility of reducing a graphite block to a layer that was between 10 and 100 layers thick. One of his students attempted to do so and managed to come up with a fleck of graphite that was some 1,000 layers thick.

Then Geim gave it a try, using Scotch tape. He managed to peel off a layer, and by repeatedly using the tape came up with layers that were progressively thinner. Eventually, he whittled the graphite down until it was 10 layers thick, and further refinements led to the first graphene sheets, in which the atoms are arranged in a hexagonal pattern.

That advance resulted in Geim and Novoselov winning the 2010 Nobel Prize in physics, though it is everyone else who will truly reap the benefits of their work.

Here are some of the many possibilities for graphene/nano, as noted in a digitaltrends.com report:

  • Solar Power: Silicon is most often used in solar cells, but research indicates that graphene could be far more efficient — that while silicon releases a single electron when hit by sunlight, graphene would release several. But again, there is still much to learn on this front.
  • Semi-Conductor: A Department of Energy study showed what has been suspected — that graphene can help semiconductors operate more efficiently. Specific to the study, semiconductive polymers conducted electricity more quickly when placed atop a layer of graphene. The caveat is that the flow of electricity through graphene cannot be interrupted, though advances are being made in that area.
  • Water Filtration: Water is the only liquid or vapor capable of permeating graphene, giving it the potential to filter out toxins or desalinate seawater, something especially critical given the looming worldwide water crisis.
  • Flexible Electronics: The durability of such things as smartphones and tablets could be greatly improved by graphene. No longer would there be worries about damaging a phone that is in your pocket, when you bend over or exercise.
  • Biomedical Research: It is feasible that sensors or small machines made of graphene could be inserted into the human body, to examine different areas or deliver medication to a desired location.

Another report suggested that graphene could also be used in transparent screens, camera sensors or in material strengthening and even DNA sequencing.

Again, there is a long way to go before graphene’s full potential is realized. But the sky appears to be the limit.

How Brain-Machine Interfaces Could Combat AI

How Brain-Machine Interfaces Could Combat AI

The debate over whether the human brain is computable has been a topic of discussion among scientists and technologists for quite some time. Leading scientists, such as Miguel Nicolelis, argue against the possibility. Nicolelis, Duke School of Medicine Distinguished Professor of Neuroscience and founder of the Walk Again Project, insists that computers will never replicate the human brain: in his opinion, replicating human consciousness is impossible because vital features of consciousness are the result of unpredictable, nonlinear interactions among billions of cells.

As more and more artificial intelligence technologies are built by top companies such as IBM and Google, however, some worry he'll be proven wrong. Renowned physicist Stephen Hawking said that artificial intelligence "could spell the end of the human race." Inventor and engineer Elon Musk has said that artificial intelligence "is the most existential threat that we face" and that "with artificial intelligence, we are summoning the demon." Scientists worry that mankind could lose control of a powerful enough AI, creating a threat to the species.

Although Nicolelis believes that the human brain is non-computable, he knows first-hand that computers have the brainlike ability to interpret neural activity and perform actions on the brain's behalf. This is called brain-machine interface technology, which differs from artificial intelligence in that it supplements thoughts already formed by the human brain and carries out the corresponding actions, whereas artificial intelligence reproduces human cognition and functions independently of the human brain. Nicolelis is currently working on brain-machine interface technology to aid in therapy for those suffering from spinal cord injuries.

While futurists express concern about how powerful tech could harm us, technologies like this prove that such innovations can be an enormous help to humans that are struggling. They also present a unique opportunity for humans to mitigate the threat of AI by becoming more computer-like ourselves.

Miguel Nicolelis' Walk Again Project has developed a brain-machine interface that interprets what the user wants based on their brain activity and turns that activity into commands that perform an action, such as moving an arm up or stepping a foot forward. This technology uses an electroencephalography (EEG) cap rather than an implanted electrode to read a patient's brain activity. Nicolelis and his team at Duke University used it in conjunction with an exoskeleton suit that moves using hydraulic pumps and brainpower; the first ceremonial kick of the 2014 World Cup was made by a paralyzed man using this technology.

Patients who suffer from paralysis as a result of spinal cord injuries or neurodegenerative diseases are often robbed of their ability to effectively communicate with those around them. In an effort to improve their quality of life, the BrainGate team has developed a brain-machine interface of its own. By implanting a tiny neural prosthesis directly into the brain, patients with spinal cord injuries can search the Internet, a simple task that was impossible before the introduction of this technology. The prosthesis works by detecting neural signals associated with intent, which can be decoded by advanced algorithms; the patient's brainwaves essentially control where to "tap" on the screen of a tablet. Over time, scientists will work to perfect this form of brain-machine interface so that patients with paralysis from spinal cord injury can enjoy greater functionality.
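
The decoding idea at the heart of such systems can be sketched in a few lines. The toy Python below fits a linear map from neural firing rates to an intended 2D cursor velocity on entirely synthetic data; real decoders, including BrainGate's, are far more sophisticated and individually calibrated.

    import numpy as np

    # Toy linear intent decoder: firing-rate vectors -> 2D cursor velocity.
    rng = np.random.default_rng(0)
    n_samples, n_channels = 500, 96        # 96 channels, as on a Utah array
    true_map = rng.normal(size=(n_channels, 2))

    rates = rng.poisson(5.0, (n_samples, n_channels)).astype(float)
    velocity = rates @ true_map + rng.normal(scale=0.5, size=(n_samples, 2))

    # Fit the decoder by least squares, then decode a fresh sample.
    decoder, *_ = np.linalg.lstsq(rates, velocity, rcond=None)
    new_rates = rng.poisson(5.0, (1, n_channels)).astype(float)
    print("decoded velocity:", new_rates @ decoder)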

If brain-machine interfaces can lend brain power to those who are not able-bodied, it stands to reason that they can augment the neural abilities of, well, anyone. It should come as no surprise, then, that the U.S. military is also researching and developing ways to use brain-machine interface technology. The Silent Talk Helmet is being produced as an initiative of the Defense Advanced Research Projects Agency (DARPA) and is funded by the U.S. government. Using brain-machine interface technology, the helmet will allow soldiers to communicate with one another silently using their thoughts: it will detect an individual's word-specific neural impulses, which will be analyzed and delivered to the soldier on the receiving end. While the Silent Talk Helmet is still in its early stages, DARPA is proposing another brain-machine interface technology in the form of binoculars that would help detect targets and increase a soldier's field of view.

Such brain-machine interfaces will help soldiers perform at their peak level. If you ever had to compete with an AI, these abilities would certainly lend you an edge. According to Elon Musk, a “neural lace” that allows humans to communicate with computers could help society to “achieve a symbiosis between human and machine intelligence, and maybe solve[s] the control problem and the usefulness problem.” In other words, it would balance the playing field and potentially halt some job automation.

While many of these brain-machine interfaces are still in their early stages, it won't be long before neurologists and scientists refine these technologies for long-term use; Musk is already working on a solution that would make us all more cyborg-like than we already are. The potential for even more technologies that assist and lend power to humans is huge. Still, the more advanced brain-machine interfaces get, the more credibility is lent to the fears of AI skeptics. If a computer can express what the brain is thinking, how long before it can imitate it? Are brain-machine interfaces enough to compete with AI, if the brain turns out to be computable after all?

Only time will tell, but for now, I believe this technology gives us more reason for hope than terror.

High-Tech Innovations Making “Black Mirror” A Reality

High-Tech Innovations Making “Black Mirror” A Reality

‘Black Mirror,’ the Twilight Zone-esque series in its third season on Netflix, features suspenseful one-off episodes that examine the darker side of modern technology. The sci-fi drama is uncomfortable—and at times, terrifying—precisely because of how plausible the scenarios are, whether set today or in the distant future. The show earns its name by mirroring reality in a bleak, black way.

Some episodes of ‘Black Mirror’ serve as cautionary tales about technology, but it’s rarely the tech itself that is the villain. More than anything, ‘Black Mirror’ is brilliant at uncovering the chilling situations innovation makes possible, turning them into fresh narratives for the speculative fiction genre.

Many episodes are centered around products, services or technologies that exist today, whether in a semi-advanced state or in infancy. Here’s a look at some that are closer to reality than you might think.

Capturing and replaying memories

In the first season’s third episode, ‘The Entire History of You,’ people have implants that allow them to record their experiences and replay them at will. This ability allows the characters to obsessively replay bits and pieces of their history—in this episode, to examine a broken relationship and find evidence of a partner’s cheating.

Today, our lives are more recorded than ever; our digital footprint includes not just photos, but everything we say, do and search online; what our virtual assistants hear; and footage from every street or building with a camera. Most of us rely on our phones to capture and relive moments, and high-tech eyewear is right around the corner. Though Google Glass’ first iteration was a bust, products like Snap Inc.’s Spectacles promise to “make memories from your perspective.” If you think that’s excessive, Sony’s patent on a contact lens camera could subtract the nerdy headgear from the equation.

Human batteries

In “Fifteen Million Merits,” people exercise on bikes to power their surroundings and earn “Merits.” Characters have to pay to skip advertisements during everyday activities, and often watch reality shows to distract them from their inane existence.

Apps like Pact already gamify your exercise routine by helping you earn (or lose) money depending on your activity level. As for human batteries, the concept of a floating gym in Paris powered by human workouts shows that it’s possible in theory. A stamp-size device has even been invented to power mobile devices using the wearer’s daily movements.

The social credit system

In the first episode of the third season, “Nosedive,” a woman struggles to get to a friend’s wedding as her personal rating drops, barring her from certain privileges. This touches on the very real reliance many have on online ratings and social media validation.

The Chinese government has already developed a social credit system that generates scores based on day-to-day behavior. Stateside, an app called Peeple, touted as the Yelp for people, received fervent backlash prior to launch. Lulu, an app that rated men, faced outrage for a similar reason and changed into a conventional dating app as a result. Even in the face of distaste, it’s impossible to deny how much sway these systems have on personal and professional status: think networking events that are limited to attendees with 500K followers or more.

Preserving your identity with AI

In the episode “Be Right Back,” a young widow enlists a service that uses online data and social media to recreate her dead husband in body and mind.

As robots get closer to human-like every day, the AI component is improving as well. ETER9, still in beta, exists expressly to create AI extensions of people capable of posting, liking, and commenting online even after you are gone. Similarly, Eternime wants to preserve people as digital avatars that their descendants can ask questions from beyond the grave.

Russian coder Eugenia Kuyda debuted a similar project in memory of a deceased friend, using over 8,000 lines of donated text messages to create a messaging bot that his friends and family could interact with.
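Kuyda’s bot was reportedly built on a neural network, but the simplest version of the idea, replying with the stored line most similar to an incoming message, can be sketched with plain TF-IDF retrieval. The corpus below is invented for illustration.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Toy stand-in for the donated message history (illustrative lines only).
corpus = [
    "see you at the gallery tonight",
    "I loved that record, play it again",
    "don't worry, everything works out",
]

vectorizer = TfidfVectorizer()
matrix = vectorizer.fit_transform(corpus)

def reply(message: str) -> str:
    """Return the stored line most similar to the incoming message."""
    scores = cosine_similarity(vectorizer.transform([message]), matrix)
    return corpus[scores.argmax()]

print(reply("should I worry about tomorrow?"))  # -> the "don't worry" line
```

A retrieval bot like this can only echo what the person actually wrote; generating new text in their voice is what requires the heavier machine learning.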

Advanced AR and VR

In “Playtest,” an American traveler named Cooper is paid to test an augmented reality game for a prestigious, secretive gaming company.

The first game he tests is an AR whack-a-mole game; the next is a 4D horror experience that generates a mixed reality based on the player’s greatest fears (or so the audience thinks). To function, the game requires an implant embedded at the top of the spine.

With the help of AR headsets like Microsoft’s HoloLens, the whack-a-mole game is an easy reality. As for the next game, 4D VR is also underway: a full-body suit called Skinterface can let players feel what happens during a VR game, like Cooper did in the episode, though not as painfully. And while there are no thought-gathering implants at the moment, the game Nevermind collects physical and emotional feedback to alter its course, not unlike the game in “Playtest.”

It seems we can look forward to the technologies that make ‘Black Mirror’ so creepy as much as we can to the promise of Season 4.

Is Startup “WayUp” the Netflix for Jobs?

Is Startup “WayUp” the Netflix for Jobs?

There are a few reasons the streaming service Netflix became ubiquitous and disruptive so quickly: namely, the technology and user experience. Netflix uses smart algorithms that offer suggestions based on your viewing history, and make finding shows and movies easy and enjoyable.

It’s no surprise that startups look to Netflix as an example of successful disruption. Finding the right movie isn’t like pulling teeth anymore, thanks to the Netflix model. But the ease and accuracy of Netflix’s technology aren’t limited to the entertainment industry; far from it.

One example of the “Netflix model” being reimagined is WayUp, a startup that wants to make job-searching more like browsing for a new comedy series than slowly dying inside. Since finding a job can be a painful and frustrating experience for applicants and recruiters alike, WayUp hopes to turn the process on its head.

How? According to an article in Fortune, WayUp’s smart platform uses data to match applicants with open positions, much like Netflix does with its suggestion algorithms. The company collects 40 critical data points for every applicant, and claims to use them to match applicants and jobs with more accuracy than competitors like Indeed, Monster, and LinkedIn.
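WayUp hasn’t published how its matching actually works, but the general shape of such systems is easy to sketch: encode each applicant and each job as a vector of data points, then rank jobs by similarity. Everything below (the features, the weights, the jobs) is invented for illustration, not WayUp’s model.

```python
import numpy as np

# Toy feature vectors: [GPA (scaled), coding skill, writing skill, design skill]
applicant = np.array([0.9, 0.8, 0.4, 0.1])
jobs = {
    "backend intern":   np.array([0.7, 0.9, 0.2, 0.1]),
    "content writer":   np.array([0.5, 0.1, 0.9, 0.3]),
    "product designer": np.array([0.6, 0.3, 0.4, 0.9]),
}

def cosine(a, b):
    """Cosine similarity: 1.0 means identical direction, 0 means unrelated."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Rank openings by similarity to the applicant, best match first.
for title, vec in sorted(jobs.items(), key=lambda kv: -cosine(applicant, kv[1])):
    print(f"{title}: {cosine(applicant, vec):.2f}")
```

The production version presumably adds far more signals and learning from hiring outcomes, but the ranking-by-similarity core is the Netflix analogy in a nutshell.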

WayUp has raised $18.5 million in a Series B funding round led by Trinity Ventures, with participation from its existing pool of investors. The company, founded in 2014, has raised $27.5 million in total so far.

It seems that WayUp is already seeing moderate success, with 3.5 million users spanning 5,300 US campuses and 300,000 employers. Companies using WayUp to recruit include major corporate players like Google and Starbucks. By integrating newer and more advanced machine learning into its service, WayUp hopes to provide millennials with the personalization and accuracy the traditional job-searching experience is so devoid of.

The idea is fairly simple: users of WayUp get matched only with the jobs they are qualified for, sparing recruiters and job-hunters wasted effort and bad matches. The app also shows users targeted content that may help them get the jobs they’re interested in.

All of this may sound great in theory, but as with any new service, theories don’t hold sway on their own without evidence to support them. Luckily for WayUp, it has that too: according to CEO Liz Wessel, one in three people who use the service to apply for jobs gets hired. In a world where millennials are struggling to find jobs, those aren’t bad odds.

Let There Be Light: How AR Can Help the Visually Impaired

Let There Be Light: How AR Can Help the Visually Impaired

Augmented Reality, also known as AR or simply “Pokemon Go” to some, is poised to dramatically change life as we know it. But, as has become obvious, there is so much more to AR than catching virtual creatures. In fact, for some it could mean the difference between vision and darkness.

A startup called OxSight has developed a product that helps the visually impaired recognize and navigate objects in their vicinity. If commercialized, the product could take the place of canes and seeing-eye dogs by helping wearers localize objects near them. But unlike the canes and canines, this AR technology would amplify what little sight its wearers have left, giving them more freedom of movement.

As an example, it can be difficult for the visually impaired to differentiate what’s in the foreground from what is in the background. Augmented reality can add a sort of “highlight” or “aura” to objects in the foreground so wearers have a better sense of depth. Depending on their needs, they can even customize the experience by zooming in and out or boosting colors. Adjustments can be made via a handheld control.
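OxSight’s actual pipeline isn’t public, but the foreground “aura” idea can be sketched in a few lines of OpenCV, assuming a depth camera supplies a depth map. Everything here (the fabricated depth map, the threshold, the colors) is illustrative only.

```python
import cv2
import numpy as np

# Stand-in depth map: smaller values = nearer. A real headset would get this
# from its depth camera; here we fabricate one with a "near" object in it.
depth = np.full((240, 320), 200, dtype=np.uint8)
cv2.circle(depth, (160, 120), 40, 80, -1)

frame = np.zeros((240, 320, 3), dtype=np.uint8)  # the camera image

# Everything nearer than the cutoff counts as foreground.
foreground = (depth < 120).astype(np.uint8) * 255

# Draw a thick bright outline ("aura") around each foreground object.
# (OpenCV 4 signature: findContours returns contours and hierarchy.)
contours, _ = cv2.findContours(foreground, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
cv2.drawContours(frame, contours, -1, (0, 255, 255), thickness=4)
cv2.imwrite("highlighted.png", frame)
```

The real device does this continuously, on live video, and tuned to whatever residual vision the individual wearer has.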

Users can also benefit from cartoon outlines of the figures in front of them, allowing them to differentiate between faces and recognize objects. This is especially helpful in low-light situations, like dark bars, or when interacting with loved ones whose faces they thought they might never see again.

With 70 million blind people in the world—about one percent of the population—the market for this technology is not exactly huge. OxSight may also face some challenges bringing their product to market because medical devices are highly regulated.

Even so, the implications of this technology are monumental. Imagine a world in which the blind were not limited to navigating their surroundings with primitive tools like canes, but could have what is left of their vision amplified. Now, imagine similar technologies being used to assist those with autism, dyslexia, and dementia. It could change modern medical care as we know it.

Founder Dr. Stephen Hicks thinks all of this and more will be possible. The question is ‘when’ rather than ‘if.’

How Machine Learning is Changing Customer Service

How Machine Learning is Changing Customer Service

As machine learning gets more and more sophisticated with every passing year, it’s wise for people in all industries to begin paying attention to the ways it will and won’t be impactful. Customer service is just one of many departments that advances in machine learning will transform before we know it, for better or for worse.

According to an article on The Next Web, millennials are driving a shift toward “hyper-personal sophisticated experiences” in all areas of customer service. Fall behind, and companies will find their customers bored, turned off, or even upset. That’s where machine learning comes into the equation: it can help companies keep their finger on the pulse of customer needs, and adapt their services accordingly.

The new marketer, then, can’t just be a person: technology needs to be involved, and it needs to be smart.

“Instead of having to manually identify customer groups and which offers [present] a valuable opportunity, these can now be automatically identified and prioritized using a combination of predictive analytics and machine learning technology,” author Graham Cooke writes for The Next Web. “This technology can then feed these opportunities back, listed according to which ones offer the largest untapped revenue opportunity.”

By measuring who wants what, when, and how they feel about it every step of the way, machine learning technologies offer companies what the article calls “empirical empathy.” This can be used to target specialized user experiences to the people who want them most, and to generate revenue as a result.
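As a toy version of what “automatically identified and prioritized” could look like in practice (not any vendor’s production system), this sketch clusters invented customer data and ranks the resulting segments by a crude untapped-revenue proxy.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(1)

# Toy customer data: [visits per month, avg basket value, conversion rate]
customers = rng.random((300, 3)) * [20, 150, 0.3]

# Automatically identify customer groups (here: 4 clusters via k-means).
labels = KMeans(n_clusters=4, n_init=10, random_state=1).fit_predict(customers)

# Score each segment by a crude "untapped revenue" proxy:
# lots of visits and big baskets, but low conversion.
for seg in range(4):
    group = customers[labels == seg]
    visits, basket, conv = group.mean(axis=0)
    untapped = visits * basket * (1 - conv)
    print(f"segment {seg}: {len(group)} customers, opportunity score {untapped:,.0f}")
```

Real systems layer prediction on top (who is about to churn, who is about to buy), but even this crude version shows how segmentation and prioritization stop being manual work.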

This will transform not only customer service but the entire face of digital commerce, Cooke predicts. The threshold for keeping up is high: businesses must have not only great products, but also a fundamental understanding of each customer segment and the ability to deliver meaningful experiences that both connect and drive sales.

This should be great for customers, who are actively driving this demand until it’s met. If companies can harness machine learning to meet it, the benefits are huge. The future is still uncertain, but one thing is clear: refusing to leverage new technology in customer service would be a big mistake. Huge.

5 Ways Graphene is Building Sci-Fi Tech IRL

5 Ways Graphene is Building Sci-Fi Tech IRL

A single-atom layer of carbon, isolated by an ingenious scientist armed with a piece of Scotch tape, that proves to be the thinnest, strongest, and most flexible material ever created? You might be thinking that this sounds like something straight out of science fiction. But in an age where Jetsons-like technology is becoming more of a reality every day, one nanomaterial rises above the rest to bring an astonishing array of game-changing applications to our lives.

We’re talking about graphene: an overachiever stronger than diamond, thinner than a sheet of paper, and more conductive than copper. It’s fitting, then, that the nanomaterial has spurred over 25,000 patents for world-changing applications since its isolation in 2004. It even nabbed the 2010 Nobel Prize in Physics for its discoverers, Sir Andre Geim and Sir Konstantin Novoselov. So why aren’t more people familiar with graphene?

While the science world was immediately awed by the limitless potential of graphene to change the world more than any material since plastic, the methods and costs of producing graphene on a large scale have proven to be quite a challenge. Fortunately, recent breakthroughs in production methods have reduced graphene production timelines and costs significantly—meaning the nanomaterial’s potential is about to be realized on a huge scale. Here are just five ways we can expect to see sci-fi-like technology becoming part of our world in the near future:

1. Airships to Deliver Houses to Remote Areas

Scientists have been enamored with the idea of airships since the first hydrogen balloon made it across the English Channel in 1785. While passenger dirigibles gained popularity in the early 1900s, the Hindenburg disaster in 1937 crushed public confidence in airship travel.

Fast forward to 2016, and helium airships powered by supercapacitors fitted with curved graphene are hitting the market. The main advantage of airships over helicopters and planes is their ability to lift off and land without a runway, making them particularly useful for carrying heavy equipment to remote areas. Supercapacitor manufacturer Skeleton Technologies has teamed up with French startup Flying Whales to build a 60-ton-capacity airship designed to transport prefab houses and other large objects, like wind turbines, to remote areas. An electric propulsion system leaves a much smaller environmental footprint at a lower cost. Houses delivered by airship? Sounds sci-fi, but we can expect to see industrial production as soon as 2020.

2. Robots to Clean the Ocean

Unless we do something drastic to reverse the trend, by 2050 there could be more plastic waste than fish in the world’s oceans. While scientists have been testing viable options for cleaning water pollution for years, a recent breakthrough has led to a promising new solution in the form of graphene robots. A swarm of graphene-coated nanobots capable of cleaning lead from wastewater sounds pretty sci-fi, but that’s exactly what an international team of scientists has recently developed.

According to a paper published in the journal Nano Letters, these revolutionary nanobots could remove 95% of the toxic lead present in a body of water in just one hour. Each bot combines a graphene oxide exterior that captures lead and other heavy metals, a nickel core that allows scientists to control the bots’ movement via magnetic field, and an inner platinum coating that reacts with hydrogen peroxide to create an “engine” that self-propels the bots forward through the water. Even more impressive, these microscopic robots can be reused after being stripped of the collected lead ions in an acidic bath. Further testing will focus on extending their abilities to additional metal pollutants. This is a huge breakthrough demonstrating nanotechnology’s potential environment-saving applications.

3. Solar Panels to Store Energy When It’s Raining

Solar panels have grown increasingly efficient as a means of generating energy, but their dependence on the sun makes them impractical for daily energy needs on a global scale. Enter graphene’s amazing conductivity, which a team of scientists in Qingdao, China is exploiting to develop a prototype solar cell that generates power from raindrops.

By coating solar cells with a layer of liquefied graphene, scientists found that raindrops—which contain positively charged ions—adhered to the graphene surface and stacked to form layers with a potential energy difference between them strong enough to produce electrical current.

The prototype still needs refining, but the potential applications for solar energy in areas with extended rainy seasons and limited access to traditional energy sources will be game changing.

4. Electrodes to Build Better Brains

Researchers at the University of Trieste in Italy and the Cambridge Graphene Centre have demonstrated how graphene could be used to make better brain electrodes to treat various medical conditions like motor disorders and paralysis. When embedded in the human brain, these electrodes would interface with nerve cells without damaging the cells’ integrity.

Again, it’s graphene’s amazing conductivity that comes into play here, making it a natural winner for electrodes. Traditionally, electrodes have been made out of tungsten or silicon, which lose their conductivity over time as scar tissue forms over the implant site. Graphene’s ability to retain conductivity makes it a very promising material for the future of deep brain implants, which may hold the key to breakthrough treatments for Parkinson’s and other degenerative diseases.

5. Computers Operating at the Speed of Light

Silicon Valley is starting to worry that the end of Moore’s Law is in sight, now that chip technology is just a few years away from manipulating materials at the atomic level. It’s hard to get much smaller than an atom, so maybe the solution lies in a new material that will dethrone silicon. Graphene is a contender, but it’s limited by the fact that it has no bandgap in its molecular structure, making it difficult to retain data in addition to sending it at superfast speeds. For now, IBM believes carbon nanotubes may be a better chip alternative.

However, graphene could upend the entire industry by moving us from electric to light-powered computers. Since photons can move information much more quickly than electrons, many believe the future of computing lies in optic technology. Once scientists solve the complex optic computing puzzle, we can expect to see graphene as a main player in our future devices. Bendable smartphones are just the beginning.

Into the Sci-Fi Future

Now that graphene has promised to carry our houses, clean our oceans, make energy from raindrops, upgrade our brains, and supercharge our computers, we’re really not that far from a Jetsons-like future. Now we just need flying cars to commute to our three-day workweeks.

Ethics — The Next Frontier For Artificial Intelligence

Ethics — The Next Frontier For Artificial Intelligence

This post was originally featured on TechCrunch.com

AI’s next frontier requires ethics built through policy. Will Donald Trump deliver?

With one foot in its science fiction past and the other in the new frontier of science and tech innovations, AI occupies a unique place in our cultural imagination. Will we live into a future where machines are as intelligent as — or, frighteningly, more intelligent than — humans? We have already witnessed AI predict the outcome of the latest U.S. presidential election when many policy wonks failed.

Perhaps we are further along than we thought.

In October, then-President Obama hosted the White House Frontiers Conference, which focused on the leading global technologies featured in the November issue of WIRED, guest edited by Obama. Given that our country was founded by innovators and disruptors who envisioned and executed the exciting new technologies of their time — like the postal service, the precursor to our inbox struggles — it feels like we’re coming full circle to have our then-current president comment on the next wave of technology that will keep the U.S. at the forefront of innovation. Artificial intelligence is at the heart of that innovation.

Of course, now that the 2016 election has come to pass, there’s an elephant in the room: Donald Trump. What President Trump will have to add to this conversation is, as of now, another great mystery.

Though many might argue the point, Obama put it to WIRED that now is the best time to be alive. The next four years will admittedly be different, but the point stands. There are no real reasons to believe technology’s rapid pace will slow much, and the same goes for AI.

So far, we’ve seen just the tip of the AI iceberg through technology such as virtual personal assistants, self-driving cars and credit card fraud prediction technology. If we pause for a second to recognize how incredible it is that we can ask our phones for directions, sit back as our self-driving Uber takes the wheel, and get an instant email alert when algorithms suspect our credit card has been hacked, we might just feel like our current reality is a sci-fi plotline.

The implications of AI will be important moving forward, and may require more attention on a federal level. In a report released by the White House on the current and future state of AI, leading innovators considered not only the technology that will drive AI, but also the ethical considerations that must fuel its growth. How should we regulate automated cars to ensure public safety? How can AI be used to streamline government operations and provide new jobs (for humans)?

No other technology has the kind of far-reaching, global implications that artificial intelligence does across so many industries, from healthcare to transportation. Which is what makes it so fascinating. For a technology designed to mimic human-like intelligence, artificial intelligence captures our collective imagination in a way no other technology ever has.

Machines have already surpassed humans in terms of image recognition ability. In the next 20 years, experts predict that machine learning will continue to make great strides on a number of human tasks. If this innovation is done in an ethical way, we can build a future in which humans are not competing with machines or being overtaken by robots, but instead entering into a new era of collaboration that frees up the human spirit for more meaningful tasks that require emotional intelligence.

This is where public policy must keep pace with the rapid advances happening in AI technology. It is not unreasonable to hold Donald Trump accountable for ensuring such policies are protective of both private and public interest.

Uber recently debuted a fleet of self-driving cars in Pittsburgh. As mind-boggling as hailing an automated taxi at the tap of a smartphone screen is, imagine the mental traffic jam caused by trying to regulate transportation and safety laws to accommodate this next wave of driverless cars. The DOT has its work cut out for it, as will all of our elected officials.

But the technology AI has already brought to life is still narrow in focus compared to its far-reaching potential. Scientists call this narrow versus general AI, and while we’re still living in the age of the former, it’s only a matter of time before we achieve the latter: an age in which machines can function across a wide array of tasks as intelligently as a human. Voice-command personal assistants and driverless cars are pretty cool, but they’re like riding a bike with training wheels. We’ve only just begun.

This is just the type of problem that makes AI a uniquely American challenge in so many ways. As we create new technologies that will alter the fabric of our daily lives, we must simultaneously implement the policy solutions that will protect AI from going the way of sci-fi movies — in which machine learning approaches the singularity and humans are trumped by the very machines they’ve created. We must take a democratic approach to future technology. We have to ensure that science is aligned with the public good.

Donald Trump, then, will have to somehow reckon with the inevitability of automation, even as he promises to bring back manufacturing jobs. Not everyone has high hopes; in fact, more than 100 technology leaders warned in an open letter that a Trump presidency would be a “disaster for innovation.” That one-fourth of entrepreneurs are immigrants, and that tech companies are apt to recruit talent overseas, could bolster this theory considering Trump’s harsh stance on immigration. Nor does it bode well that Trump is likely to put national security interests over those of privacy.

But fear not. Given that sectors like technology move globally and have power beyond policy, their path won’t be much hindered by a Trump presidency. Even better, advances in artificial intelligence are already supporting the notion of a future in which machines strengthen human perception rather than deplete it. For instance, one study cited in the White House report described an experiment in which images of lymph nodes were shown to an AI system and a human pathologist to determine whether they were cancerous. When the AI and the pathologist pooled their knowledge and approached the task as a team, the error rate dropped to 0.5 percent, an 85 percent reduction in error. Studies like this make an undeniable case for the good side of AI.
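If the standalone error rates were roughly 7.5 percent for the AI and 3.5 percent for the pathologist, as reported about this study, the 85 percent figure is measured against the human baseline:

$$\frac{3.5\% - 0.5\%}{3.5\%} \approx 0.857 \approx 85\%$$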

As for preventing the sci-fi side of AI? That’s where policy must intervene, to rise to the challenges of machine learning. Though Trump has no clear policies on the matter as of now, we can only hope technological democracy succeeds as we move forward, whether because of or in spite of American leadership. We may find that while his administration is less concerned about “ethics,” per se, the prospect of a harmoniously automated future is a deal too good not to secure.

Here’s What the Future of Augmented Reality Will Look Like

Here’s What the Future of Augmented Reality Will Look Like

We live in a world populated by virtual monsters and millions of people intent on catching them all. With the unprecedented success of Pokemon GO last summer, the tech world is receiving a loud and clear signal that consumers are ready to embrace augmented reality (AR). Monster hunting is only the beginning.

Meanwhile, virtual reality (VR) is also promising interesting new forms of content, especially for media consumption. Last August, Netflix unveiled its first foray into the medium with a VR promo for its new original show Stranger Things. And while the spooky atmospheric show is the perfect backdrop to immerse viewers in the 3D world of its characters, it excludes a huge percentage of the population by requiring a 360 VR viewer like Google Cardboard to participate. This is the crux of why VR has been slow to gain the kind of momentum that AR achieved in one fell swoop with Pokemon GO. Whereas virtual reality requires additional equipment that can be clunky or expensive, augmented reality requires nothing more than the technology we already have in our pockets. It’s the perfect medium for the smartphone age.

Virtual reality transports viewers into an entirely fictional world. Augmented reality seeks to blend elements of the real world with a virtual one, creating a space with seemingly limitless variations that could redefine not only how we play, but also how we work and live. And when we’re living in an age in which a giant traffic-transcending bus is becoming a reality, it’s arguably more interesting to blend real-world elements with virtual ones. Building more creativity into our daily tools will have a domino effect in terms of technological and scientific breakthroughs. Imagine what Einstein could have done with augmented reality at his disposal.

With startups like Magic Leap raising a staggering $1.3 billion in funding for mixed reality technology that has largely been kept under wraps, it’s clear that the future of tech resides in these mixed media spaces. Instead of passively consuming information, as we mostly do today on the internet, mixed reality allows us to have compelling immersive experiences. Magic Leap is creating a new interface using a lightfield chip to utilize your brain as a computer display, and among its teaser videos is a new way to start your day that blends your morning routine with a digital overlay. In a talk with Fortune, Magic Leap CEO Rony Abovitz calls Pokemon GO a “gateway to a whole new future that we’re building.”

Magic Leap recently partnered with Lucasfilm to create immersive Star Wars experiences, so you can bring R2-D2 and C-3PO directly into your home. Clearly the space is ripe for entertainment partnerships, but like the tablet moved from a gaming to a productivity tool, Magic Leap’s founders envision mixed reality (MR) technology transforming how we work just as much as how we play.

While the distinction between augmented reality and mixed reality is still being defined, all of our daily smartphone uses are ripe for a blended-reality makeover of some kind. And it’s strangely fitting that Pokemon—a cult favorite game from the nineties—has resurfaced to lead the way into the AR future. After all, augmented reality tech has been around since the late 1960s, when Ivan Sutherland created the first head-mounted augmented reality system to show computer-generated images. By the early ’90s, the Air Force was using AR to let the military remotely control machinery. By 1998, the 1st & Ten technology had changed the way we view football. From there, AR made its way more fully into the entertainment industry in a variety of media forms. It was only a matter of time before R2-D2 landed in your living room.

As we race into a future in which hunting virtual monsters is the new norm, we can expect to see AR technology redefining a number of industries beyond entertainment. Aviation companies Elbit and ATR have recently developed AR headsets that they believe will help pilots land planes in poor visibility conditions that would previously have required rerouting. With 3D visualizations of cockpit data to enhance pilot capabilities, the companies hope to receive certification by next year. From aviation to auto tech, augmented and mixed reality have the potential to boost safety by analyzing more factors than a human can process at once.

Meanwhile, Trillenium is among a crop of VR and AR companies looking to transform the retail shopping experience by combining the best of real-world and online shopping. Imagine being able to browse aisles as if in person, without having to drive to the mall, park, and battle crowds. Virtual stores seem like the inevitable outcome of retail plus online shopping with a dose of mixed reality thrown in.

In the workplace, AR and mixed reality could be huge productivity tools, particularly in areas like training and product creation. A software developer could step into her world while it’s being built, rather than be separated from it by a monitor. Medical applications are also promising, from simulated controlled environments for PTSD patients to rehabilitation aids for disabled patients.

While there are so many interesting applications for augmented and mixed reality on the horizon, what unites this technology is its promise of immersing us more fully in the world around us, rather than transporting us to a fantasy land. In an age where technology has often been criticized as an isolating and dehumanizing force, it’s exciting to see how augmented reality could foster real-world connections. If you need proof, just go play Pokemon GO in your nearest public park.

How Nanotechnology Is Changing Healthcare—And Life As We Know It

How Nanotechnology Is Changing Healthcare—And Life As We Know It

In the past few years, wearable tech and mobile apps have been dominating the healthcare startup scene. Zephyr created the Anywhere Biopatch, an FDA-approved monitoring device that attaches to a patient’s chest and tracks minute-by-minute vitals. And Pager launched an on-demand doctor service so that no one will have to wait in a lobby to see their primary care physician ever again.

But it’s another industry—nanotechnology—that is poised to enter the healthcare market in a big way. Nanoparticles used in life sciences research already generate $30 billion per year. Nanomachines will have a huge impact on healthcare, and everything from stem cell research to gene therapy is fair game.

Here are just 3 recent nanotechnologies at the forefront of the healthcare revolution:

1. Cell repair and healing

By studying the way specific cells in our body operate, scientists believe they can create nanomachines that replicate cell repair and healing.

White blood cells and fibroblasts, in particular, are model cells for nanomachines. White blood cells can identify a problem, move through the bloodstream to the target site, and break down harmful pathogens. Fibroblasts produce collagen essential to the healing process.

Cell repair nanomachines would be a crucial development for cells that don’t naturally heal or are difficult to repair, like nerve cells and heart cells. By repairing severed or damaged nerves, nanomachines could restore sight and mobility and repair the damage done by heart disease.

Another recent breakthrough in cell repair uses nanolasers in a process called scanning probe lithography (which creates a map for stem cells to grow into). Scientists have been able to coax stem cells to grow into bone tissue using this method, and believe they can eventually create molds for every other tissue type as well.

2. Cancer detection

Another huge development comes from Google X, which is developing a pill that could detect early-stage cancer.

The pill would release magnetic nanoparticles into the bloodstream that attach to cancer cells. Used along with a wearable sensor, it would let a patient detect the presence of cancer cells as soon as they appear.

The non-invasive pill would have a huge impact on cancer prevention, and could save millions of lives each year. It would also cut down on multiple hospital visits, expensive MRIs, and stress on cancer patients.

3. Diabetes detection and treatment

One of the most promising use cases for healthcare nanotechnology comes in the form of a cost-effective test for Type 1 diabetes.

Nearly all diabetes tests require blood samples, and even the most “convenient” ones require patients to prick their fingers with a needle. The traditional test that determines whether a patient has Type 1 or Type 2 diabetes is expensive, and can only be performed in a clinical setting.

But researchers at MIT have created a “nanotechnology tattoo” with nanoparticle “ink” that can be used to track blood glucose levels. And on the other side of the country, researchers at the Stanford University School of Medicine have created a handheld plasmonic microchip that anyone can use to distinguish between the two types of diabetes. Both technologies would make affordable early diabetes diagnosis and prevention a reality.

But it gets even better. Patients with Type 1 diabetes have damaged islet cells, which are normally responsible for secreting insulin and keeping blood sugar low. Another team of researchers at MIT has created injectable nanoparticles that can sense high blood glucose levels and respond by secreting just the right amount of insulin.

What does the future hold?

Not all of these technologies are fully realized. And many other types of groundbreaking nanotechnologies, like gene therapy, may not see the light of day for another 10 to 20 years. It takes a long time (and a lot of money) for promising academic research to make its way into a startup’s hands.

But nanotechnology is the next frontier for medicine, and we are heading towards it with every passing day.

Micro-Machines Win Nanotechnology Its First Nobel

Micro-Machines Win Nanotechnology Its First Nobel

While Bob Dylan garnered the most news for winning the Nobel Prize in Literature, there’s another big winner worth talking about for the science and tech community. What’s that? Thanks to three pioneers in nanochemistry, nanotechnology has netted its first Nobel!

The 2016 Nobel Prize in Chemistry was awarded to Jean-Pierre Sauvage, Sir J. Fraser Stoddart, and Bernard L. Feringa for creating the world’s smallest machines: molecules with minuscule motors and controllable movements. Add energy, and these molecular machines can perform numerous tasks.

The miniaturization of technology could lead to a revolution. With the molecular motor at the same stage the electric motor was in the 1830s, we have a long way to go down a road with endless possibilities. According to the Royal Swedish Academy of Sciences, “Molecular machines will most likely be used in the development of things such as new materials, sensors and energy storage systems.”

This accomplishment has been decades in the making, and all three men have made significant strides in research over the years. Sauvage made the first big breakthrough in 1983 by successfully linking two ring-shaped molecules. Stoddart made the second breakthrough in 1991 when he threaded a molecular ring onto a thin molecular axle, and from there developed a molecular lift, a molecular muscle, and a molecule-based computer chip.

The last breakthrough was made by Feringa, who developed the first molecular motor in 1999. Fifteen years later, that motor could spin 12 million times per second. In 2011, his team used these motors to power a tiny molecular car.

The implications of their innovation could be huge. Think of tiny robots able to travel through a person’s bloodstream to deliver medicine, new materials like graphene, or tiny, powerful supercomputers.

What else but the Nobels has awards for both lyrical and chemical masterminds? The answer, my friend, is blowin’ in the wind. As for me, I’m glad to live in a world wide enough to recognize all types of genius. See this video for more explanation of this awesome accomplishment.

Self-Driving Cars’ Toughest Obstacle? Us

Self-Driving Cars’ Toughest Obstacle? Us

Much ado has been made about the future of autonomous vehicles, whirring smoothly down city streets, carting passengers from place to place safely and soundly. But the phenomenon, which is well underway in testing stages, is not without obstacles. The major roadblock? You and me.

That’s right. The self-driving car itself, though far from flawless, won’t be the problem that fails us. According to an article published in Popular Science, it’s our inability to actively cooperate with the technology that makes things unsafe. It seems that we’ve been wrong about driverless cars in believing that such cars can actually be, well, driverless.

Recently, Tesla’s Autopilot mistook a tractor trailer for an overhead road sign, leading to a deadly collision in Florida. The technology failed, and the consequence was undeniably devastating. Has the public been misled about the role of autonomous vehicles—or more specifically, our role in keeping them in check?

Thus far, self-driving cars have been hyped as a driverless utopia in action, wherein passengers can sit back, relax, finish work, or take a nap without worrying about their safety. This is a nice idea, but not an accurate one right now. Companies like Tesla with autopilot features still recommend that a driver be present and keep their eyes on the road. As the article puts it, “shared control is the name of the autonomous-driving game.”

It’s okay to feel a bit duped by this. Driverless cars have been marketed as safe and, above all, trustworthy. But it’s important to read the disclaimers. For example, YouTube clips may show people playing games while their car chauffeurs them around town, but the video description says: “DISCLAIMER:…The activities performed in this video were produced and edited. Safety was our highest concern. Don’t be stupid. Pay attention to the road.” In other words, what you’ve seen is a fiction—don’t try this at home.

Automakers need to make clear that drivers should be available at the wheel to quickly detect problems and step in to correct them if needed. This may require training of some sort, but it’s worth it. If people are blindly trusting and out of the loop, there will be problems. Cars could also be designed with obvious hand-off signals—like flashing lights or beeps—that alert passengers when they need to take over.

Sometime in the future, autonomous vehicles may live up to their name and the vision futurists and advertisers have set into motion. For now, we must all proceed carefully by—at the very least—reading the instructions before taking new tech for a test drive.

Artificial Intelligence Software “Flow Machines” Composes First Pop Song

Artificial Intelligence Software “Flow Machines” Composes First Pop Song

Pop songs — you either hate them or love them, but most people feel some sort of way about their evolution over the years. Many have argued that music has become generic over time; with creative genius hard to come by, tunes are blandly catchy as if spit out by a machine.

Well, now we have pop music literally spit out by a machine. Since songs, good and bad, are highly formulaic, it makes sense that a computer could replicate them given the right algorithm. Now, at long last, we have robots to rival robotic pop stars of the world.

The world’s first song composed by an AI comes to us from Sony CSL Research Labs, where a system called Flow Machines was fed a wide database of sheet music for songs in various styles. The finished compositions were produced, mixed, and given lyrics by French musician Benoit Carre.

Daddy’s Car, below, was written by the AI in the style of the Beatles. If you take a listen, you’ll realize how well the machine nailed the iconic British foursome’s sound.

Another song, Mr. Shadow, was composed by the AI in the more general style of “American songwriters.” Take a listen here:

Stumped as to how a computer could create these kinds of works?

Here’s a quick rundown of how it works (with a toy sketch of the idea after the list).

  • Step 1: The database, called LSDB, is set up with 13,000 leadsheets in styles including jazz, pop, and Broadway.
  • Step 2: A human musician selects a style using a system called FlowComposer; new leadsheets are generated by the AI based on this selection.
  • Step 3: The human uses the system Rechord to match audio chunks to the generated composition.
  • Step 4: The human musician completes production and mixing.
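Sony hasn’t spelled out Flow Machines’ internals here, and the real system rests on far richer style-modeling research. Purely as a toy illustration of “learn a style, then generate new material in it,” here is a first-order Markov chain over chord symbols; the corpus is invented for the example.

```python
import random
from collections import defaultdict

# Toy corpus standing in for lead-sheet chord sequences (made-up data).
corpus = [
    ["C", "Am", "F", "G", "C"],
    ["C", "F", "G", "C"],
    ["Am", "F", "C", "G", "Am"],
]

# Count first-order transitions between chords.
transitions = defaultdict(list)
for song in corpus:
    for current, nxt in zip(song, song[1:]):
        transitions[current].append(nxt)

def generate(start="C", length=8):
    """Sample a chord progression by walking the learned transition table."""
    progression = [start]
    for _ in range(length - 1):
        options = transitions.get(progression[-1])
        if not options:  # dead end: fall back to the start chord
            options = [start]
        progression.append(random.choice(options))
    return progression

print(generate())  # e.g. ['C', 'Am', 'F', 'G', 'C', 'F', 'G', 'C']
```

The output always “sounds like” the corpus because every transition was observed in it, which is the same reason a system trained on Beatles leadsheets comes out sounding like the Beatles.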

Clearly, this AI still needs a lot of human prompting before the song is complete, so you can rest assured that no robotic Taylor Swift is writing songs unchecked. Still, this is a pretty amazing step in machine learning and artificial intelligence technologies. Who knows — maybe this is exactly what the music industry needs to become innovative again.

Into the Future: A Camera as Small as a Grain of Salt

Into the Future: A Camera as Small as a Grain of Salt

Researchers out of the University of Stuttgart have discovered a way to make tiny cameras that are capable of taking incredibly sharp photos. In a new paper published in Nature Photonics, the researchers describe how they’ve been able to build lenses so small that they are comparable in size to a single grain of salt.

How can a camera this small even be constructed? Just thank the magic of 3D printing. Technology this tiny has to be manufactured as a single, seamless piece. Thanks to 3D printing, any configuration of this lens that can be designed on a computer can be printed and used. This is a huge breakthrough as technology steadily moves toward miniaturization, a trend previously hampered by the limits of conventional production methods.

This new production method uses a Nanoscribe 3D printer to write the lens design, with laser pulses, directly onto strands of optical fiber and digital sensors like those in current cameras—only much smaller. The potential applications of such a miniature lens are vast. From less invasive medical imaging hardware to sensors on drones and robots, a tiny lens will find uses in everything from medicine to transportation.

We are likely to see these tiny lenses implemented in devices in the near future, as the production doesn’t require expensive equipment. This is a huge win for 3D printing, which has typically been derided for its less than practical uses. Now we are finally starting to see viable game-changing technology coming out of 3D printing’s unique capabilities on the small scale. 3D printing may have been initially dismissed as the latest trend that lacked staying power, but we’ve already seen its useful applications in the medical field coming to fruition.

Miniature lenses might be the start of an even more exciting trend in future technology. While it’s been the stuff of science fiction for years, the concept of “smart dust”—combining sensors, antennas and computers at the microscopic scale—is moving much closer to reality thanks to this new method of producing mini lenses.

Essentially, smart dust envisions a future in which microscopic high-tech computers are so small and so light that they can be released into the wind and scattered around the world to collect and monitor unprecedented amounts of data. The Internet of Things and the current big data wave would pale in comparison. We may still be a few years from seeing actual smart dust blowing in the wind, but lenses as small as a grain of salt are a sure sign that we’re moving in that exciting direction.

Microsoft’s Bet on Conversational Intelligence

Microsoft’s Bet on Conversational Intelligence

Hot on the heels of its huge acquisition of LinkedIn, Microsoft is betting on another, lesser-known startup to give it an edge in the conversational intelligence race. Wand Labs is a tiny startup with just seven employees, but Microsoft saw enough promise in the messaging app technologies it has been building since 2013 to acquire Wand this past month.

So how does this acquisition fit into Microsoft’s larger strategy of moving away from being a software company to positioning itself as a nimble cloud and mobile contender? According to the announcement on Microsoft’s official blog, “Wand Labs’ technology and talent will strengthen our position in the emerging era of conversational intelligence, where we bring together the power of human language with advanced machine intelligence — connecting people to knowledge, information, services and other people in more relevant and natural ways. It builds on and extends the power of the Bing, Microsoft Azure, Office 365 and Windows platforms to empower developers everywhere.”

So what is conversational intelligence, and why is it so important? We are moving into a future where we can expect messaging technology to act intelligently, with interfaces that allow collaborative tasks such as sharing a song or letting a friend control your Nest thermostat. This is part of a larger industry trend of building bots and virtual assistants that can handle the smaller tasks of life through a simple voice or swipe command. Microsoft’s acquisition of Wand Labs signals its willingness to bring on new talent to move its capabilities beyond what it has already done with Cortana, the company’s personal assistant app.

Wand Labs was founded by Vishal Sharma, a Google veteran who has been ahead of the intelligent apps curve for years. His expertise will be a big asset as Microsoft makes inroads in third-party developer integration, semantic ontology, and service mapping. Microsoft CEO Satya Nadella calls this “Conversation as a Platform,” and it will be integral to the future integration of all the disparate tech we use on a daily basis. Stay tuned to see what the Wand and Microsoft team will roll out in the near future.

3D Printing Body Parts: Where Scientists Are & What Comes Next

3D Printing Body Parts: Where Scientists Are & What Comes Next

3D printing is one of the latest technological advances of the modern age, yet few people have made use of 3D printers at home or in the office. Industrial manufacturing companies are tapping into 3D printing to produce everything from jet engine parts to soccer cleats, reports PricewaterhouseCoopers. Now, scientists and medical professionals are taking the lead on re-creating human tissue and body parts using 3D printing technology.

The future of healthcare and medicine may very well involve implants and tissues made with 3D printing. Here’s a closer look at where scientists are now, and what is coming next:

3D Printing for Implant Surgery

Surgeons and medical professionals have been trying to find effective solutions for bone grafting and joint replacement techniques for years, often turning to a patient’s own bone and tissues as a donor or resorting to cadavers and animals for donor tissue. Many surgeons use synthetic grafting materials made with compounds that easily integrate with human bone and tissue.

With 3D printing, we could manufacture bone and joint tissues completely customized for the patient: real, living tissues and organs ready for implantation.

Mashable recently reported on the world’s first implant surgery using 3D-printed vertebrae. A neurosurgeon at the Prince of Wales Hospital in Sydney, Australia, treated a patient who had a tumor in his spine using a custom body part created with a 3D printer.

Removing the tumor with traditional surgical methods was too risky because of its location. Without treatment, however, the tumor would have compressed the brain and spinal cord, rendering the patient quadriplegic. The surgeon worked with medical device company Anatomics to create a titanium implant using 3D printing technology, and thanks to that 3D-printed implant, the surgery was a success.

The Future of 3D Printing Body Parts

Medical research on a 3D bioprinting system that can produce human-scale tissue with structural integrity has been published in Nature Biotechnology. The authors highlight the fact that future developments could mean we will be able to build solid organs and complex tissues.

The Integrated Tissue and Organ Printing System (ITOP) uses biodegradable material to create tissues and water-based ink to hold cells together, recreating bio-compatible tissues. Science Magazine reports how the ‘tissue printer’ creates printed materials with live cells. The final product develops a blood supply and an internal structure that looks and functions just like real tissue.

These live materials could be used as transplants to complement a variety of surgical procedures. Considering that more than 121,000 people are on the waiting list for an organ transplant in the United States alone, according to the U.S. Department of Health and Human Services, 3D printing live, transplantable tissue and organs could quite literally save lives.

3D printing technology is evolving at a rapid pace and is making notable waves in the scientific and medical communities. Using synthetic grafting materials, or even resorting to metal implants for bone and tissue replacement surgeries, could soon be a thing of the past. Surgeons and scientists are developing new ways to treat patients, creating ‘living’ tissue, organs, and body parts made with bio-compatible materials and 3D printing technologies.

Photo: Wake Forest Institute for Regenerative Medicine

Nanotechnology Could Hold the Key to Self-Cleaning Clothes

Nanotechnology Could Hold the Key to Self-Cleaning Clothes

Today’s washing machines use a whopping 27 gallons of water to wash a single load of clothes. In the near future, thanks to a new nanotech breakthrough, we may not only save ourselves time and money, but also take a huge environmental step forward in how we clean clothes.

Researchers at RMIT University in Melbourne, Australia have developed a cost-effective and efficient new method for cleaning clothes that builds the cleaner right into the garment. By growing special nanostructures capable of degrading organic matter when exposed to sunlight directly onto a textile, the scientists hope to eliminate the washing process entirely.

Just imagine: spill something on your shirt, and you’d only need to step into the sunlight for the shirt to eliminate the stain itself. While this sounds like science fiction, research into smart textiles has been going on for some time now, and this latest breakthrough could have practical applications for catalyst-based industries such as agrochemicals, pharmaceuticals and natural products. The technology could also be scaled up to industrial and consumer applications in the future.

“The advantage of textiles is they already have a 3D structure so they are great at absorbing light, which in turn speeds up the process of degrading organic matter,” said Dr. Rajesh Ramanathan, lead researcher on this exciting project.

The particular nanostructures capable of absorbing light are copper- and silver-based varieties. When exposed to light, the structures receive an energy boost that makes them release “hot electrons,” which degrade organic matter.

We’re not quite at the stage of throwing out our washing machines just yet, though. The next step is for researchers to test these nanostructures in combination with organic compounds more relevant to consumer apparel. How would these hot electrons stand up to the dreaded ketchup or wine stain?

For more on this exciting breakthrough, check out the findings, presented in the journal Advanced Materials Interfaces. Stay tuned for progress on this “strong foundation for the future development of fully self-cleaning textiles.”

Photo: RMIT University

How I-SDS Lets Enterprises Ride the Big Data Wave

How I-SDS Lets Enterprises Ride the Big Data Wave

In 2011, venture capitalist Marc Andreessen correctly predicted that software and online services would soon take over large sectors of the economy. In 2016, we can expect to see software revolutionize the economy again, this time by eating the storage world. Enterprises that embrace this new storage model will have a much easier time riding the big data wave.

It’s no secret that data is the new king. From the rise of big data to artificial intelligence to analytics to machine learning, data is in the driver’s seat. But where we’ve come up short so far is in managing, storing, and processing this tidal wave of information. Without a new method of storing data so that it’s easy to sort, access, and analyze, we’ll get crushed by the very wave that’s supposed to carry us to better business practices.

Storage’s old standby, the hardware stack, is no longer the asset it once was. In the age of big data, housing data on hardware is a limitation. Instead, a new method is emerging that allows for smarter storing, collecting, manipulating and combining of data by relying less on hardware and more on—you guessed it—software. But not just any old software. What sets Intelligent Software-Defined Storage (I-SDS) apart is that its computational model moves away from John von Neumann’s longstanding design toward one that mimics how the human brain processes vast amounts of data on a regular basis. After all, we’ve been computing big data in our heads our entire lives. We don’t need to store data just to store it—we need quick access to it on command.

One example of an I-SDS uses a unique clustering methodology based on the Golay code, a linear error-correcting code used in NASA’s deep space missions, among other applications. This allows big data streams to be clustered. Additionally, I-SDS implements a multi-layer multiprocessor conveyor so that continuous-flow transformations can be made on the fly. Approximate search and the stream extraction of data combine to allow the processing of huge amounts of data while simultaneously extracting the most frequent and appropriate outputs from the search. These techniques give I-SDS a huge advantage over obsolete storage models, because they team up to improve speed while still achieving high levels of accuracy.
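The Golay-code clustering itself isn’t spelled out here, but the general trick it relies on, bucketing items by short binary signatures so that similar items collide and an approximate search only scans one bucket, can be sketched in a few lines. This is an LSH-style stand-in for illustration, not the actual I-SDS algorithm.

```python
from collections import defaultdict
import numpy as np

rng = np.random.default_rng(42)

# Random hyperplanes project vectors down to short binary signatures.
DIM, BITS = 64, 12
planes = rng.normal(size=(BITS, DIM))

def signature(vec):
    """12-bit signature: which side of each hyperplane the vector falls on."""
    return tuple((planes @ vec) > 0)

# Cluster a stream of vectors by signature; similar vectors tend to collide.
buckets = defaultdict(list)
for i in range(1000):
    v = rng.normal(size=DIM)
    buckets[signature(v)].append(i)

# Approximate search: only compare against the query's own bucket.
query = rng.normal(size=DIM)
candidates = buckets[signature(query)]
print(f"{len(candidates)} candidates instead of 1000 full comparisons")
```

The speed-versus-accuracy trade described above falls out directly: shorter signatures mean bigger buckets and more recall, longer signatures mean tiny buckets and faster, rougher answers.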

The key to successful I-SDS rests on three pillars:


1. Abstraction

The ability to seamlessly integrate outdated legacy systems, current systems, future unknown systems, and even component-level technologies is the hallmark of an SDS with a rich abstraction layer. This allows a rich set of data services to act upon data with high reliability and availability. It also fosters policy and access control, providing the mechanisms for resource trade-offs and the enforcement of corporate policies and procedures. SDS also supports non-disruptive expansion of capacity and capability, geographic diversity, and self-service models. Lastly, abstraction makes it possible to incorporate the growing hybrid of public and private cloud infrastructures and to optimize their usage.
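
To picture what that abstraction layer buys you, here is a brief Python sketch in which every class and method name is hypothetical. Legacy and cloud tiers sit behind one narrow interface, so capacity can be added, or data migrated, without callers ever changing.

    from abc import ABC, abstractmethod

    class StorageBackend(ABC):
        """One interface for every tier: legacy, current, or future."""
        @abstractmethod
        def put(self, key: str, data: bytes) -> None: ...
        @abstractmethod
        def get(self, key: str) -> bytes: ...

    class LegacyNfsBackend(StorageBackend):
        """Wraps an aging NFS mount without exposing its quirks."""
        def put(self, key, data):
            with open(f"/mnt/legacy/{key}", "wb") as f:
                f.write(data)
        def get(self, key):
            with open(f"/mnt/legacy/{key}", "rb") as f:
                return f.read()

    class CloudObjectBackend(StorageBackend):
        """Wraps an object store; the client API here is a placeholder."""
        def __init__(self, client):
            self.client = client
        def put(self, key, data):
            self.client.upload(key, data)
        def get(self, key):
            return self.client.download(key)

    def migrate(src: StorageBackend, dst: StorageBackend, keys):
        """Non-disruptive expansion: callers never see which tier holds a key."""
        for k in keys:
            dst.put(k, src.get(k))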

2. Analytics

Analytics has become the new currency of companies. Tableau (NYSE: DATA) and Splunk (NASDAQ: SPLK) have shown the broad appetite for analytics and visualization tools that do not require trained programmers, making both available to a broad class of users in the enterprise. User experience is a key component: simplicity with power. Cloud and mobile accessibility ensure data is available, scalable, and usable anywhere and anytime; the cloud brings scale in numerous dimensions, including data size, computing horsepower, accessibility, and scalability. Multi-tenant systems with role-based security and access allow analytics and visualization to reach a broad set of enterprise (and partner) stakeholders, and this broad set increases the collective intelligence of the system. Cloud systems that are heterogeneous and multi-tenant allow analytics that cross systems, vendors, and in some cases customer boundaries, rapidly increasing the data set and potentially producing much faster and more relevant results.

3. Action

Intelligent Action is built on full API-based interfaces. Making APIs available allows the extension of capabilities and the application of resources. Closed monolithic systems from existing and upstart vendors basically say, “give me your data, and as long as it’s only my system, I will try to do the right optimization.” Applications and large data sets are complex; it is highly unlikely that over the ten-year life of a system an enterprise will deploy capabilities from only one vendor. An enterprise may wish to optimize along many parameters outside a monolithic system’s understanding, such as the cost of the network layer, the standard deviation of response time, or the percentage of workload running in a public cloud. Furthermore, the lack of fine-grained controls over items like caching policy and data reduction methods makes it extremely difficult to balance the needs of multiple applications in an infrastructure. Intelligent Action requires a rich programmatic layer, a set of fine-grained APIs, that the I-SDS can use to optimize across the data center from the application layer down to the component layer.
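
As a sketch of what such fine-grained APIs might look like, consider the Python fragment below. The endpoint, field names, and defaults are all hypothetical; the point is that caching, data reduction, placement, and latency knobs are exposed programmatically rather than hidden inside a monolithic box.

    from dataclasses import dataclass, asdict

    @dataclass
    class VolumePolicy:
        """Hypothetical per-volume knobs a monolithic system would hide."""
        cache_mode: str = "write-back"  # or "write-through", "none"
        dedupe: bool = True
        compression: str = "lz4"        # or "zstd", "off"
        max_public_cloud_fraction: float = 0.25
        latency_slo_ms: float = 5.0

    def apply_policy(volume_id: str, policy: VolumePolicy) -> dict:
        """Builds the REST call an I-SDS controller might expose."""
        return {"method": "PUT",
                "path": f"/v1/volumes/{volume_id}/policy",
                "body": asdict(policy)}

    print(apply_policy("analytics-tier",
                       VolumePolicy(cache_mode="none", latency_slo_ms=2.0)))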

Into the Future

This type of rich capability is what underlies the private clouds of Facebook, Google, Apple, Amazon, and Alibaba. I-SDS will allow the enterprise to achieve the scale, cost reduction, and flexibility of these leading global infrastructures while maintaining corporate integrity and control of its precious data. The time has come for software to eat the storage world, and enterprises that embrace this change will come out on top.

IT Revolution: How In Memory Computing Changes Everything


In 2000, a relatively unknown entrepreneur at the Intel Developer Forum said he’d like to take the entire Internet, which then existed as bits on hard drives scattered around the world, and put it on memory to speed it up.

“The Web, a good part of the Web, is a few terabits. So it’s not unreasonable,” he said. “We’d like to have the whole Web in memory, in random access memory.”


The comment raised eyebrows, but it was quickly forgotten. After all, the speaker, Larry Page, wasn’t well known at the time. Neither was Google for that matter, as the company’s backbone then consisted of 2,400 computers.

Flash forward to today. Google has become one of the world’s most important companies, and 2,400 servers would barely fill a corner of a modern data center. Experts estimate that Google now operates more than 1 million servers. And the Web has ballooned way past a few terabits. Facebook alone has 220 billion photos and juggles 4.5 billion updates, likes, new photos, and other changes every day.

But Page’s original idea is alive and well. In fact, it’s more relevant than ever. Financial institutions, cloud companies and other enterprises with large data centers are shifting toward keeping data ‘in memory.’ Even Gartner picked In-Memory Computing (IMC) as one of the top ten strategic initiatives of 2013.

Data Center History In the Making

Chalk it up to an imbalance in the pace of change. Moore’s Law is still going strong: microprocessors double in performance and speed roughly every two years. Software developers have created analytics that let researchers crunch millions of variables from disparate sources of information. Yet, the time it takes a server or a smartphone to retrieve data from a storage system deep in the bowels of a cloud company or hosting provider on behalf of a business or consumer hasn’t decreased much at all.

Then as now, the process involves traveling across several congested lanes of traffic and then searching a spinning, mechanical hard drive. It is analogous to having to go home and get your credit card number every time you want to make a purchase at Amazon from work.

The lag has forced engineers and companies into unnatural acts. Large portions of application code are written today to maximize the use of memory and minimize access to high-latency storage. Likewise, many enterprise storage systems use only a small portion of the disk space they buy, because storing data on the outer edges of disks reduces access time. To use another analogy, it is like renting an entire floor in an office building but only using the first fifteen square feet near the elevator so people can get in and out faster during rush hour.

IMC ameliorates these problems by reducing the need to fetch data from disks. A memory fabric based on flash can be more than 53 times faster than one based around disks. Each transaction might normally take milliseconds; multiply that over millions of transactions a day. IMC architectures vary, but generally they include a combination of DRAM, which holds data temporarily, and arrays based on flash memory, which is almost as fast but is persistent.
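
A quick back-of-envelope calculation shows why those milliseconds matter in aggregate. The workload and latency numbers below are illustrative assumptions; only the roughly 53x ratio comes from the text above.

    # Cumulative waiting time for one day of storage requests (illustrative).
    disk_ms = 5.0                  # assumed per-request disk latency
    flash_ms = disk_ms / 53        # the ~53x speedup cited above
    requests_per_day = 10_000_000  # assumed workload

    for name, ms in (("disk", disk_ms), ("flash", flash_ms)):
        hours = ms * requests_per_day / 1000 / 3600
        print(f"{name}: {hours:.1f} cumulative hours of waiting per day")
    # disk: 13.9 hours; flash: 0.3 hours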

The shift will have a cascading effect. Moving from drives to flash allows developers to cut many lines of code from applications. In turn, that means fewer product delays and maintenance headaches.

The Future of In Memory Computing

Some companies have already adopted IMC concepts. The social network Tagged.com was architected under the assumption that it would always retrieve data from the memory tier. SAP’s HANA addresses data entirely in memory. Oracle is making a similar shift with Exadata, now combining DRAM and flash into a ‘memory tier.’ To SAP and Oracle, the Rubicon has been crossed. In tests, HANA has processed 1,000 times more data in half the time of conventional databases. IMC will usher in an entirely new programming model and, ultimately, a new business model for software companies.

With IMC-based systems, your data center would go on a massive diet. Right now, servers in the most advanced data centers are sitting around with nothing to do because of latency: even Microsoft admits servers are in use just 15 percent of the time. Think of it: 85 percent of your computing cycles go to waste because the servers are waiting for something to do. That is a massive amount of excess overhead in hardware, real estate, power consumption and productivity.

We did some calculations on what would happen if you redesigned a data center around memory-based storage systems. You could store 40 times as much data in the same finite space. It takes 4 racks of disk storage to build a system capable of 1 million IOPS, or input/output operations per second; a flash-based storage system would need only one shelf. Energy consumption would drop by 80 percent, since memory-based systems consume less energy and require less air conditioning.
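
Here is that arithmetic spelled out. The 4-racks-to-1-shelf ratio and the 80 percent figure come from the text above; the rack layout and power draw are illustrative assumptions of mine.

    disk_racks_per_m_iops = 4     # from the text: 4 racks of disk per 1M IOPS
    flash_shelves_per_m_iops = 1  # from the text: 1 flash shelf per 1M IOPS
    shelves_per_rack = 10         # assumed rack layout
    disk_rack_power_kw = 25.0     # assumed power draw per disk rack

    target_m_iops = 5             # a hypothetical 5M-IOPS data center
    disk_racks = disk_racks_per_m_iops * target_m_iops
    flash_racks = flash_shelves_per_m_iops * target_m_iops / shelves_per_rack
    disk_power = disk_rack_power_kw * disk_racks
    flash_power = disk_power * (1 - 0.80)  # the 80% reduction cited above

    print(f"disk:  {disk_racks} racks, {disk_power:.0f} kW")
    print(f"flash: {flash_racks} racks, {flash_power:.0f} kW")
    # disk: 20 racks, 500 kW; flash: 0.5 racks, 100 kW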

The metrics around in-memory computing will continue to get better. In the future, it may be possible to produce systems with hundreds of petabytes, enough to hold five times all of the printed material ever produced. All of this data would be instantly available to applications, allowing for faster and more accurate decision making.

A shift to In-Memory Computing will allow Big Data analytics to sing. Think again about how IMC requires software reconfiguration. Reducing excess software code will accelerate performance. Speed is absolutely crucial for predictive analytics to succeed. The Internet of Things – where inanimate objects and sensors will be collecting data about the real world all the time – will become manageable. You will know what’s going on in near real-time – rather than waiting around.

This post was originally published on Forbes.com

Here’s How Graphene Will Let Us Read DNA Directly



The wonder material graphene has recently led to another exciting scientific breakthrough, this time involving the building blocks of life. Whereas reading DNA has so far been a laborious, expensive, and time-consuming chemical process, a new breakthrough using graphene could transform the gene sequencing industry.


New research from the National Institute of Standards and Technology (NIST) has simulated how DNA sequencing could become much faster and more accurate through a nanopore sequencing process: a single DNA molecule gets pulled through a tiny, chemically active hole in a super thin sheet of graphene, allowing changes in electrical current to be detected.

The simulation suggests that about 66 billion bases, the smallest units of genetic info, could be identified in just one second through this method. Even more impressive, the study found the results to be 90% accurate with no false positives. If the simulation proves as effective in physical experiments, this could be a huge breakthrough in several fields that utilize genetic information, including forensics.

While the concept of nanopore sequencing—pulling electrically charged molecules through a pore in a thin material—has been around for at least 20 years, using graphene as that sheet solves some of the major problems that have hampered the process. Because of graphene’s unique chemical properties and its extreme thinness, four graphene nanoribbons could be bonded together to form an integrated DNA sensor. While the scientific properties at play in this process are quite complex, this video of the simulation breaks it down pretty clearly. If you’re interested in a more rigorous scientific explanation, check out this article from phys.org.

The major benefit of this new approach to DNA sequencing is that it would be far more practical in the real world, eliminating the need for costly computers and complex lab setups. Once NIST perfects its method and proves its success in real-world conditions, we can expect to see huge strides in DNA sequencing.

Nanotech’s Quest to Clean Up the Environment


Nanoparticles are so small that they remain undetected by the human eye, but we interact with them in the products we use every day. From cosmetics to sunscreen to plastics, we’ve become heavily reliant on these tiny particles to strengthen and prolong the shelf life of household products.

Another class of nanomaterials, such as graphene, is finding revolutionary new ways to do everything from cleaning nuclear waste to building better batteries to engineering stronger smartphones. So it’s no surprise that these tiny particles have embarked upon a huge new quest: cleaning harmful chemicals out of the environment. Read on for two exciting scientific breakthroughs that could change the way we clean up after ourselves here on Mother Earth.

1. Trap the Chemicals

When two pharmacists turned chemical researchers set out to develop nanoparticles to carry drugs to cancer cells, they never imagined that what they would discover instead was a revolutionary way to extract toxic chemicals from the ocean.

Led by Ferdinand Brandl and Nicolas Bertrand, a research team from MIT and the Federal University of Goiás in Brazil successfully demonstrated how nanoparticles and UV light can be used to isolate and extract harmful chemicals from soil and water.

Toxic materials including pesticides often resist degradation through natural processes, meaning they linger in the environment long after they’ve served their purpose. These pollutants are harmful not only to humans and animals, but they also make it harder for Mother Nature to remain self-sustaining. What if a simple process using light and microscopic particles could effectively extract and isolate these toxic chemicals from the environment?

How Brandl and Bertrand were able to achieve this feat is scientifically complex, but the concept is beautifully simple. First they synthesized polymers made from polyethylene glycol—an FDA-approved compound you’ve likely used countless times in tubes of toothpaste or bottles of eyedrops. These polymers are biodegradable.

Because of the molecular nature of these polymers, they would normally remain suspended and evenly dispersed in a solution such as water. However, the research team found that exposing the polymers to UV light gave them a new ability to surround and trap harmful pollutants in the water. Essentially, the polymers shed their shells and then cluster together around harmful pollutants, allowing the bad stuff to be easily extracted by filtration or sedimentation.

The team demonstrated how this innovative method could extract phthalates, chemicals commonly used to strengthen plastic. As phthalates have recently come under fire for their wide-ranging, potentially harmful health effects, this method for removing them from wastewater could have huge benefits. The researchers also removed BPA from thermal printing paper samples and carcinogenic compounds from contaminated soil. Not too shabby for a microscopic particle and some light rays!

This method could prove a huge breakthrough for cleaning up the environment as its effects are irreversible and the polymers used are biodegradable. The really exciting news here, according to researchers, was proof positive that small molecules can in fact adsorb passively onto nanoparticle surfaces. For a more technical description of how this process will be a huge game changer, check out this article from MIT.

2. Shake out the Contaminants

Meanwhile, researchers in the physics department at Michigan Tech have found another way to potentially use nanomaterials to clean the ocean. Using the basic scientific principle that oil and water do not mix, a team led by research professor Dongyan Zhang demonstrated a method of shaking pollutants out of liquids that could be scaled up to clean the ocean.

Unlike polyethylene glycol polymers, many nanoparticles used in commercial products like makeup and sunscreen are not biodegradable, and their effects on the ocean are a huge problem. Zhang’s team tested the shake-to-clean method on nanotubes, graphene, boron nitride nanosheets, and other microscopic substances. They found that shaking such tiny particles out of contaminated water could be a much more effective method than mesh or filter paper.

So far the research team has successfully extracted nanomaterials from contaminated water in tiny test tubes with just a minute of hand shaking. The next step will be to figure out how to scale up this solution so it can be a viable means of cleaning the contamination out of a source of water as big as the ocean.

Scientists on the forefront of researching nanoparticles as tiny trash compactors are taking all kinds of interesting approaches to how best to clean the environment, but they all have one thing in common: the simplest methods are often the best methods, especially when it comes to complicated problems.

How Big Data is Optimizing the Classroom


Over the past decade, data science has unlocked huge stores of information that enable companies to tailor their offerings to individual consumers. Big data has allowed companies like Amazon and Alibaba to create complex algorithms that can predict consumer shopping patterns and make product suggestions with a high level of accuracy. Only recently has big data made a play for influencing education with the same level of personalization. While big data is just stepping into the classroom, we can expect to see huge transformations in the next five years in how teachers teach and how students learn.

The old teaching model is outdated for today’s world. A recent study by Columbia University found vast improvements for 6,000 middle school math students in schools across the country when teacher-led instruction was coupled with personalized learning tools. The study found that this approach fostered 1.5 years of progress in math over the course of one school year, 47% higher than the national average. Personalization is the key to better education, where one size clearly does not fit all. No two students are exactly alike, so the tools we use to teach them shouldn’t be either.

Big data is increasingly able to provide such personalization through artificial intelligence that transforms data into adaptive, customized interfaces. Effective personalization in learning tools will come from two areas of computer learning: interfaces that learn from a user’s own actions and preferences, and interfaces that learn from the overall network to make helpful inferences. Think of Netflix’s recommendations based on your past viewing history, and of Spotify’s recommendations based on what similar users are streaming. By moving away from fixed lesson plans and rigid testing to adaptive, technology-driven assessments, big data becomes smart data. Students become more active learners, with proven results that will drive the economy. According to estimates by McKinsey, increasing the use of student data in education could unlock between $900 billion and $1.2 trillion in global economic value.
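
Both flavors of learning are easy to see in miniature. The toy Python sketch below performs user-based collaborative filtering, the same network-level inference behind the Spotify example; the ratings matrix and the cosine-similarity choice are illustrative only.

    import numpy as np

    # Toy rating matrix (rows: students, cols: lessons); 0 = not yet tried.
    R = np.array([[5, 4, 0, 0],
                  [4, 5, 1, 0],
                  [1, 0, 5, 4],
                  [0, 1, 4, 5]], dtype=float)

    def recommend(user, k=1):
        """Score unseen items by the ratings of similar users."""
        norms = np.linalg.norm(R, axis=1)
        sims = R @ R[user] / (norms * norms[user] + 1e-9)  # cosine similarity
        sims[user] = 0.0               # ignore self-similarity
        scores = sims @ R              # similarity-weighted ratings
        scores[R[user] > 0] = -np.inf  # only recommend unseen items
        return np.argsort(scores)[::-1][:k]

    print(recommend(0))  # -> [2]: the most similar student has tried lesson 2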

Recently, Apple and IBM have turned their analytical expertise beyond the enterprise to this huge untapped sector by jointly developing a Student Achievement App. The partnership will roll out real-world testing in select U.S. classrooms by 2016. The app is being described as “content analytics for student learning.” It will provide teachers with real-time analytics about each student’s progress, ultimately transforming the educational experience from arbitrary to experiential.

College admissions offices are also harnessing the power of big data. Whereas applicants have traditionally been filtered mainly by standardized test scores, big data aims to direct admissions officers to the applicants who are most likely to stay for four years, graduate, and go on to future success. Ithaca College, for instance, has been using applicants’ social media data since 2007, when it launched a Facebook-like site for potential students to connect with each other and with faculty. Through statistical analysis of this data, admissions officers were able to see which student behaviors led to four-year enrollment. In other words, user engagement signals how interested a potential student is in Ithaca College. Universities can use this data to achieve a higher yield rate at lower cost. Essentially, big data provides admissions officers with a valuable measure of supply and demand.

From elementary classrooms to college campuses, big data has begun to reshape the way we learn in powerful ways. While it’s impossible to predict exactly what classrooms will look like in 2030, it’s clear that the next generation of students will learn smarter.

Is DNA the Perfect Place to Store Computer Data?


Nearly every aspect of our modern lives has become intertwined with computer data, so it makes sense that scientists would eventually take this coupling one step further. We are about to witness a data storage breakthrough in which digital information could be embedded into the primary fabric of our being: the double helix of DNA.

While this might sound like something straight out of a sci-fi movie, recent experiments led by Microsoft and the University of Washington, and separately by the University of Illinois, have demonstrated how DNA molecules may be an ideal basis for storing digital records. The most impressive part? Researchers say that all the world’s data could be stored in nine liters of solution. For reference, that’s a single case of wine.

While at first this may seem hard to imagine, storing data on DNA actually makes a lot of sense. After all, DNA is already an amazing data storage tool, holding all the information needed to create a healthy human being, and it’s remarkably sturdy at the job. Now that we can assemble synthetic DNA strands, it follows that we should be able to control what information gets stored on them.

DNA data storage is still in the research and development stage, but its eventual success will solve a few critical storage problems. First off, scientists believe this method could keep data safely stored for over a million years! Compared to the decades-long lifespan of current microelectronic storage on disk, tape, and optical media, this longevity would be a huge upgrade.

DNA is also a very space-efficient storage method. Picture a grain of sand. A DNA molecule even smaller than that could potentially store up to an exabyte of info—or the equivalent of 200 million DVDs.
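
That figure is easy to check with a line of arithmetic, assuming a standard 4.7 GB single-layer DVD:

    exabyte = 10**18                    # bytes
    dvd = 4.7e9                         # bytes on a single-layer DVD
    print(f"{exabyte / dvd:.2e} DVDs")  # ~2.13e+08, i.e. roughly 200 million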

As the costs of producing synthetic DNA continue to fall, a hybrid storage solution may also be in the near future. This coupling of biotechnology and information technology would be a huge milestone in a partnership that dates back to the early 60s. After all, the first personal computer, the LINC, was developed for biomedical research purposes.

Researchers have already proven the ability to store specific data in DNA strands, and then later recall that data in digital form. To picture how this could work, imagine a file of a photo. That photo gets broken into hundreds of components that are then stored on separate DNA molecules. Researchers can encode a specific identifier that allows that picture to be put back together seamlessly when you need it again, like instantly assembling a jigsaw puzzle.
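
For a rough feel of how such identifiers could work, here is a toy Python sketch; it is my own illustration, not the researchers’ actual scheme. It packs two bits per base, prefixes each fragment with an 8-base index tag, and reassembles the file by sorting on the tags. Real systems would add error correction, primers, and redundancy.

    BASES = "ACGT"  # two bits per base
    LOOKUP = {b: i for i, b in enumerate(BASES)}

    def to_bases(data: bytes) -> str:
        return "".join(BASES[(b >> s) & 3] for b in data for s in (6, 4, 2, 0))

    def to_bytes(seq: str) -> bytes:
        out = bytearray()
        for i in range(0, len(seq), 4):
            val = 0
            for ch in seq[i:i + 4]:
                val = (val << 2) | LOOKUP[ch]
            out.append(val)
        return bytes(out)

    def encode(data: bytes, chunk: int = 16):
        """Split data into fragments, each tagged with an 8-base index."""
        return [to_bases((i // chunk).to_bytes(2, "big")) + to_bases(data[i:i + chunk])
                for i in range(0, len(data), chunk)]  # up to 65,536 fragments

    def decode(fragments) -> bytes:
        """Sort fragments by index tag, strip the tags, and reassemble."""
        ordered = sorted(fragments, key=lambda f: to_bytes(f[:8]))
        return b"".join(to_bytes(f[8:]) for f in ordered)

    photo = b"not really a photo, but any bytes will do"
    assert decode(encode(photo)) == photo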

So far, the high cost of storing data in DNA is a prohibitive factor to putting this method to commercial use. But as new partnerships in biotech and computer science continue to explore this field, we’re bound to see a breakthrough within our lifetime. It’s well worth keeping an eye on, as the potential for revolutionizing how we store and retrieve information is enormous for our data driven world.


Big Data, Big Genes: Why I-SDS Will Lead the Data Storage Race


Over the last decade, big data has given rise to an unprecedented bounty of information. This data has, in turn, transformed the face of industries ranging from healthcare to consumer tech to retail. All this data is definitely a good thing—for designers, scientists, policy makers, and just about everyone else—but it’s led to a unique problem.

How can we store raw data that grows more unwieldy every day? According to a 2013 study, 90 percent of all the data in the world has been generated in the preceding two years alone.

While video services like YouTube are obvious major contributors to the data tsunami, there’s another huge player emerging in the game. A recent study in PLOS Biology found that genomics—the study of gene sequencing and mapping—will be on par with YouTube-scale data by 2025. In terms of data acquisition, storage, distribution, and analysis, genomics is the next big thing in big data.

It makes sense that genomics would benefit from recent breakthroughs in data acquisition. After all, cracking the code of human DNA holds the potential to tailor individual medical treatment based on a patient’s genes. Genomic medicine has the potential to replace the one-size-fits-all approach that healthcare has often taken in the past.

Over the last decade, the acquisition of genomic data has grown exponentially. The total amount of human gene sequence data has increased by 50 percent every seven months. And that doesn’t even take into account the estimated 2.5 million plant and animal genomes expected to be sequenced by 2025.

The biggest driver of this upward trend? Our desire to live longer and healthier lives, free from disease. The Wall Street Journal recently covered the rising trend of employers offering free and subsidized genetic testing to employees. Screening for genetic markers of obesity and certain types of cancer takes standard medical benefits to a new level. Genetic information offers new insights and informs new strategies for tackling health issues; in an era where self-tracking is the new norm, we are hungry for DNA data. This isn’t just another move for companies to offer employees more wellness perks, though—it could have major cost-saving benefits for employers. Obesity is a huge contributor to other costly medical conditions, so better employee health also benefits the financial health of the company.

It’s safe to say we’re going to see genomic data skyrocket in the next few years. Data storage will need to adapt to be able to house this huge amount of information so we can learn from it. The solution? Intelligent Software Designed Storage (I-SDS).

I-SDS removes the need for cumbersome proprietary hardware stacks by replacing them with storage infrastructure managed and automated by intelligent software. Essentially, we are moving away from an outdated computational model to one that mimics how our human brains compute massive amounts of data on a daily basis. I-SDS will be more cost-efficient and provide better methods for accessing data with faster response times. Intelligent software is the next frontier for storage if we want to reap the benefits of genomic big data.

The Biggest Airplane Innovator Since the Wright Brothers


Move over, aluminum—it’s time for microlattice to revolutionize aeronautical engineering. Developed by Boeing, microlattice is the world’s lightest metal, composed of 99.99% air amid a series of thin, hollow struts. The 3D open-cellular polymer structure makes this material incredibly lightweight. So light, in fact, that it can balance atop a dandelion!


At the same time, microlattice is impressively strong, thanks to a structure that mimics that of human bone: a rigid outside coupled with a hollow, open cellular interior. It’s also less brittle than bone, designed with a compressible grid structure that allows it to absorb a large amount of energy and then spring back into shape, much like a sponge. What’s more, microlattice floats down to the ground like a feather when dropped. Surely something with such an elegant design mirroring the natural world has the potential to radically alter the way we construct aircraft, cars, and more.


Boeing makes this breakthrough easy to understand with a familiar scenario from high school science class: the egg drop challenge. The usual method for dropping an egg from multiple stories involves padding it in bubble wrap and hoping for the best. With microlattice, Boeing has essentially created a structure that could closely surround the egg and absorb all of the force of impact, without a lot of bulk. So your eggs won’t get scrambled.


In real-world applications, we can expect to see microlattice replacing traditional materials used to construct airplanes and rockets. Replacing even a small percentage of the aluminum commonly used in aircraft with microlattice could significantly reduce an aircraft’s overall weight. A lighter plane requires less fuel, and with fuel representing the lion’s share of airline operating costs, the savings would trickle down to consumers in the form of lower ticket prices. Most importantly, microlattice’s impressive strength and flexibility upon impact mean that lightening the load would not hamper performance; it could instead enhance the overall durability and safety of aircraft.


While microlattice was first invented by scientists at UC Irvine, HRL Laboratories, and Caltech back in 2011, it’s only now coming into viable applications through Boeing’s further development. This isn’t the first time Boeing has revolutionized aircraft engineering. With the design of the 787 Dreamliner, Boeing introduced the first plane whose fuselage was made of one-piece composite barrel sections instead of aluminum panels. Combined with new carbon fiber materials, the 787 became the most fuel-efficient plane in its class.


Imagine how Boeing’s ongoing innovations, coupled with microlattice, will change the aerospace game even more. With panels or sidewalls made of microlattice, commercial jets would be lighter, stronger, and more fuel efficient. It’s only a matter of time until we see this amazing new wonder material taking to the skies, and it’s likely that other earthbound applications will be discovered as well. For microlattice, the sky’s the limit.

10 Ways Graphene Will Change the World


Graphene is an amazingly strong, thin, and versatile “wonder material” that has led to over 25,000 patents since it was first isolated in 2004. Scientists praise it as a single layer of carbon atoms with amazing strength and conductivity, and investors are just as impressed with graphene’s potentially limitless applications. Think of all the ways plastic changed the world after its invention; now it’s graphene’s time to shine. Here are 10 major ways graphene will change the world as we know it.

1. Batteries

Combining two layers of graphene with one layer of electrolyte could be the key to getting us into battery-free electric cars within the next five years. By replacing the cumbersome and costly car battery with a graphene-powered supercapacitor, scientists may have hit on the answer to the stunted growth of electric cars. Supercapacitors could lead to faster vehicle acceleration and speedy charging. Combined with the fact that they’re also smaller, lighter, and stronger than today’s electric batteries, it’s clear that graphene will reshape the auto industry in coming years.

2. Healthcare

Graphene-based materials have been favorably received in the biomedical field. Ongoing research into applying graphene’s unique physicochemical properties to healthcare is positioning the nanomaterial to improve treatments in a variety of ways. From stimulating nerve regeneration to treating cancer via photothermal therapy, graphene could change the way we heal.

3. Lighting

Combining an atomically thin graphene filament with a computer chip allowed scientists earlier this year to create the world’s thinnest light bulb. This is a huge feat: light bulbs have never been combined with computer chips before, because the high heat needed to produce light damages the chips. Graphene’s unusual property of becoming a poorer heat conductor at high temperatures lets it emit light without damaging the attached chip. This is going to be a huge game changer not only in home lighting, but also in smartphones and computers, where graphene could provide a faster, cheaper, more energy-efficient, and more compact method of processing information. Let there be light!

4. Green Energy

Graphene allows positively charged hydrogen atoms, or protons, to pass through it despite being completely impermeable to all other gases, including hydrogen itself. This could make fuel cells dramatically more efficient. It could also allow hydrogen fuel to be extracted from the air and burned as a carbon-free energy source. This source of water and electricity would, incredibly, produce no damaging waste products.

5. Sports Equipment

From super-strong tennis racquets to racing skis, graphene has limitless potential to improve the strength and flexibility of sports equipment. It has already been used in cycling helmets that are both extremely strong and lightweight. By using graphene as a composite material to strengthen traditional sports equipment, manufacturers are bringing new hybrids to market that give athletes a competitive advantage.

6. Bionic Materials

While this may sound like a plot from a Spider-Man movie, researchers have successfully applied graphene to spiders, which then spun webs incorporating the nanomaterial. The result? Webs with silk 3.5 times stronger than the spiders’ natural silk—which is already among the strongest natural materials in the world. This discovery could lead to the creation of incredibly strong bionic materials that could revolutionize building and construction methods.

7. Tech Displays

Most of today’s tablet and smartphone displays are made with indium tin oxide, which is expensive and inflexible. Graphene is set to replace it as a thin, flexible display material for screens. This could also be a huge breakthrough for wearable tech, where flexibility is even more important.

8. Manufacturing Electronics

The recent application of graphene-based inks will fuel breakthroughs in the high-speed manufacturing of printed electronics. Graphene’s optical transparency and electrical conductivity make it much more appealing than traditional ink components. Thanks to its flexibility, future electronics might be printed in unexpected shapes.

9. Cooling Electronics

White graphene—hexagonal boron nitride arranged in a 3-D lattice structure—could hold the key to keeping electronics from overheating. By providing better heat distribution and flow than the materials currently used in smartphones and tablets, white graphene will keep the future cool.

10. Better Body Armor

By now you know how thin, strong, and flexible graphene is. What’s more, graphene is also great at absorbing sudden impact. Researchers have found it to be 10 times better than steel at dissipating kinetic energy, like that delivered when a bullet strikes body armor. This could revolutionize soldiers’ armor, thanks to graphene’s unprecedented ability to distribute an impact over a large area. Researchers have also proposed using it as a covering on spacecraft to mitigate damage from orbital debris. That’s one tough nanomaterial!


This article was originally published on graphene-investors.com


How Mobile Wallet Apps are Reshaping the Ways We Pay


The recent explosion of mobile payment apps could signal the end of traditional wallets stuffed with credit and debit cards. By leveraging social and mobile capabilities, as well as utilizing cloud computing and SaaS models, these mobile wallets have a leg up on traditional banks, which are often slower to innovate because of stricter regulations. A recent report by Accenture predicts that unless traditional U.S. banks learn to emulate these tech disruptors, they stand to lose as much as 35% of their market share by 2020.


A recent study by Nielsen on mobile payments found that 40% of mobile wallet users reported it as their primary method of settling the bill. Demographically, users aged 18-34 account for 55% of active mobile payments, and mobile payments appeal across gender and income levels, too. As more mobile payment methods move from QR codes to NFC, the convenience and ease of paying via mobile wallet apps could make it the new norm. If you’re skeptical of paying for everything with your phone, rest assured that mobile payment methods are actually more secure than swiping your credit or debit card, because they never expose your card number. Instead, they use a randomly generated number called a token, and these tokens change with every transaction, making fraud much less likely (a toy sketch of how such tokens work appears after the list below). In the future we can expect to see a huge rise in mobile biometrics as a way to further increase payment authenticity. In the meantime, here are four mobile wallet disruptors to keep an eye on as we head toward 2016:

1. Apple Pay


Now available in the U.S. and the U.K., Apple Pay allows iPhone 6 or Apple Watch users to make retail payments via Near Field Communication (NFC). With international rollout plans in the works, Apple Pay is already accepted at over 700,000 locations, including major retailers like Whole Foods, Staples, Nike, Subway, Macy’s, and of course, Apple. You can even pay for entry into U.S. national parks with Apple Pay. Apple already has deals with the major credit card providers, and Discover recently joined as well. Retail rewards cards are also in the works, which will make it simple to automatically apply rewards in one simple checkout, an incentive that will play a big part in the rising popularity of mobile wallets.


2. Android Pay


Android Pay is also NFC-enabled, allowing you to quickly pay with your default card at NFC-enabled checkouts. It’s currently not linked with any apps, as Apple Pay is, but Google says it’s working on app integration. A plus side of Android Pay is that it’s available on many Android phones, unlike Apple Pay, which requires an iPhone 6 or later.


3. Samsung Pay


Samsung just launched its competitor to Apple Pay and Android Pay, and it tops them both in one major way: Galaxy users can pay in more stores than with any other mobile payment service. Utilizing NFC and MST (magnetic secure transmission), Samsung Pay works at NFC-enabled checkouts and also at regular card readers through the MST feature. It’s also compatible with EMV readers, so the recent shift to EMV in the United States will pose no hassle for Samsung users. This one’s the clear winner in terms of being accepted at the most locations.


4. PayPal Here


Eager to stay relevant in a sea of rapid payment innovations, PayPal just launched its latest device in the U.S. The PayPal Here Chip Card Reader enables retailers to process Apple, Android, and Samsung Pay. Because the U.S. recently upgraded to EMV—smart cards that store data on embedded chips instead of magnetic stripes, and which have been popular in other countries for years—PayPal’s device comes at the perfect time. In order to comply with the new liability rules that took effect October 1st, many retailers will have to upgrade their systems to process these payment methods. Now you can tap, insert, or swipe pretty much any form of payment with this handheld device. The reader sells for $149, with an incentive program through which small business owners can earn cash back by making $3,000 in sales on the device within the first three months.
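
As promised above, here is a toy Python sketch of the single-use token idea behind these wallets. A real wallet uses device-bound tokens and per-transaction cryptograms inside certified hardware; this in-memory stand-in only shows why an intercepted token is worthless to a fraudster.

    import secrets

    class TokenVault:
        """Toy stand-in for a payment network's tokenization service."""
        def __init__(self):
            self._vault = {}  # single-use token -> real card number

        def issue(self, card_number: str) -> str:
            token = secrets.token_hex(8)  # unpredictable stand-in value
            self._vault[token] = card_number
            return token  # the merchant only ever sees this

        def redeem(self, token: str) -> str:
            # pop() ensures a token can never authorize a second charge
            return self._vault.pop(token)

    vault = TokenVault()
    t = vault.issue("4111 1111 1111 1111")
    vault.redeem(t)    # first charge clears
    # vault.redeem(t)  # replaying the token would raise KeyError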

The Bottom Line

These new mobile wallet options aim to make purchases easy and painless for consumers. Retailers who don’t keep pace with the changes will lose business as the financial future becomes increasingly mobile.


Pros & Cons of Apple’s New iPhone Leasing Program


Apple’s latest and greatest iteration of the iPhone launches today with the iPhone 6s Plus. With upgrades to the front- and rear-facing cameras, as well as a new Rose Gold finish, lines are already forming in cities around the world.

In addition to the hardware, Apple unveiled an interesting sales technique with the launch of its iPhone Upgrade Program. Apple is calling it a financing program, but it’s essentially a lease. By enticing consumers with yearly phone upgrades replete with AppleCare and the option to choose a phone plan and provider, Apple is borrowing from luxury carmakers like Mercedes and BMW.


By enticing consumers to trade up to the latest model long before their old model is remotely obsolete, these luxury companies are betting on the appeal of keeping up with the Joneses—and it’s working. From a sales standpoint, Apple’s Upgrade Program cashes in on the larger tech trend of planned obsolescence, in which the latest model makes the past iteration far less desirable.

Apple’s Upgrade Program allows users to buy a new iPhone with low monthly payments over a two-year period, or to trade up to the latest iPhone model every 12 months, by adding a monthly payment of $32 (for the 16GB 6s; rates go up from there) to a user’s monthly phone service bill.

$32 doesn’t sound like that much, right? Instead of paying for a brand new $700 iPhone, it seems like a steal. Let’s take a look at the pros and cons of Apple’s new financing program from a consumer standpoint:

Pros

If you’re someone who utilizes the full capabilities of the iPhone for work or pleasure, it likely makes sense for you to upgrade consistently to the latest model. Spreading the cost out over a number of months can lessen the financial hit.

The program includes Apple Care, which covers hardware repairs, software support, and maybe most importantly, two cases of accidental damage. So those annoying shattered screens that come from accidentally dropping your phone are no longer an issue when you’re on the Upgrade Program… as long as you can avoid being clumsy more than twice a year.

Cons

You’re signing on to higher monthly phone payments indefinitely for reasons that some might call vain and superficial. Is Apple just enabling you to lease a lifestyle you otherwise couldn’t afford?

Putting that cynical argument aside, the bigger con here is that leasing something costs more. Waiting a few months after a product’s launch allows consumers to purchase the product at a lower price, or buy a refurbished iteration of the previous generation of product for significantly less.

In order to receive a new iPhone every 12 months under the upgrade plan, you have to trade in your current phone, meaning you can’t plan on reselling it, even if it’s in good condition.

With the required two-year commitment, Apple’s upgrade option lets you get the 16GB 6s for $32.45 a month. The phone costs $649 to purchase new at retail, plus an additional $129 for AppleCare. So the leasing price comes out to $778.80 over the two-year commitment—only 80 cents more.
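
The comparison is easy to verify:

    monthly = 32.45
    lease_total = monthly * 24                   # two-year Upgrade Program commitment
    retail_total = 649 + 129                     # phone plus AppleCare, bought outright
    print(lease_total, retail_total)             # 778.80 vs 778
    print(round(lease_total - retail_total, 2))  # 0.80, the 80-cent difference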

While this price difference is negligible, consumers should note that buying into the Upgrade Program requires signing a loan contract with Citizens Bank. To be approved, you’ll need a strong credit score. And because upgrading to a new phone every 12 months restarts your two-year contract, you’re locked into a permanent rental state.

That’s on top of cell phone bills, which average over $100 a month on major US plans.

Why not just take advantage of the upgrade that comes with most common cell phone plans every two years? Since the phone’s cost is often built into a carrier service plan and bundled with a lower down payment, buying into the Apple Upgrade Program could mean you’re essentially paying twice for upgrades and service that should be built into a decent carrier plan.

The bottom line

Whether or not Apple’s new financing plan makes sense for you comes down to personal preference, much like the choice between buying and leasing a car. Since millennials are accustomed to paying monthly fees—renting where their parents’ generation owned—Apple is sure to find plenty of iPhone users who will buy into constant upgrades and the illusion of lower costs.

You just have to decide which is worth more: value or style.

Photo credits: Flickr/Irudayam; Apple

*This article originally appeared on The Next Web. Check out my author page here.

Why Google Glass Broke – And How It’s Fixing Your Doctor’s Office


The journey of Google Glass can teach any entrepreneur valuable lessons about brand strategy. From highly anticipated technology breakthrough, to famous retail flop criticized for its appearance, to its recent and more promising reincarnation as a business tool, Glass has seen a lot of action in its two-year life span. In a market exploding with wearable technology, how did a hands-free computer from one of the biggest tech companies on the planet flop so spectacularly?


From a marketing perspective, Google made some interesting choices in launching this amazing device that may have worked against it:

No official product launch: Glass wanted to be seen as hip from the get-go, so prototypes were given to early adopters and celebrities in the hopes that the mystique would drive consumers to happily shell out $1,500 for the hot new tech-cessory. That may have been the case, but consumers were never tuned in to an actual product release date—or told where they could purchase the product. Google should have taken a page from Apple’s playbook in creating buzz around new products with a well-publicized release date.

No clear brand messaging: The amazing potential of Google Glass got muddled somewhere between celebrities wearing it and consumers not knowing exactly why they needed it. The device looked sci-fi at best (or geeky at worst), and Glass’ myriad features were lost amid criticism of the frames’ appearance. Essentially, the capabilities of the product were lost amid the noise. Google could have marketed its product’s amazing features more effectively through a clear advertising campaign.


Google Glass is now under the leadership of Tony Fadell, whose track record as Nest CEO and Apple product designer is a good omen for Glass’ reincarnation. Fadell’s team took Glass’ initial failure as an opportunity to pivot the product away from the consumer market to industries where the complex technology would be more relevant, including the doctor’s office. This summer Google quietly relaunched Glass, not as a trendy wearable device, but as a business tool equipped to save lives in the emergency room.


So what can entrepreneurs learn from Google Glass’ about-face?


Turn setbacks into opportunities: By repositioning Glass as a tool used exclusively in business settings, Google has found a way around the initial issue of privacy. Consumers were not happy with Glass’ ability to discreetly record video in public places; the new iteration of Glass will instead be used for internal video transmission in business settings. Picture a doctor live-streaming a surgery to colleagues and medical students, or a technical engineer in the field receiving live feedback from colleagues in the office. In these cases, live-streamed video will be an invaluable tool.
Learn from criticism: Fadell has been tasked with making Glass more user-friendly and attractive. Reported updates include making the device waterproof, foldable, and equipped with a better battery. If a consumer version is relaunched in the future, Glass will likely take its many aesthetic criticisms into account, too.
Target the right audience: While it didn’t work for a consumer market, Glass has found a new home with enormous potential in the medical, manufacturing, and energy fields. According to research firm Gartner, the market for head-mounted displays is expected to reach a cumulative 25 million units by 2018. The lesson here is that sometimes what begins as a B2C product evolves into B2B applications.  


From its not-so-humble beginnings as a celebrity accessory to its quieter success as a lifesaving tool in the ER, Google Glass has had an interesting journey so far, with more pivots likely to come as the product continues to evolve.


Photo credits: Flickr/Erica Joy; Pixabay


Warren Buffett’s $32B Bet on the Aerospace Industry


According to the industry group Airlines for America, 14.2 million people are expected to travel during the 2015 Labor Day holiday weekend. With that number steadily on the rise, air travel is booming, and it has just piqued business titan Warren Buffett’s interest.


Buffett’s illustrious holding company, Berkshire Hathaway Inc., recently acquired Precision Castparts in an estimated $32 billion deal, said to be the company’s largest acquisition to date. Berkshire Hathaway reportedly paid $235 per share in cash for the company, which makes metal equipment for the aerospace industry. The deal is expected to close in the first quarter of 2016.


Justifying his interest in Precision Castparts to the New York Times, Buffett said, “It is the supplier of choice for the world’s aerospace industry, one of the largest sources of American exports.” With the improved economy and the steady increase in air travel, Buffett’s interest makes sense. As long as air travel is on the rise, the airline industry will need more planes, which will inevitably need more parts.


The merger inches Berkshire Hathaway further into the industrial sector, alongside acquisitions such as Marmon, an industrial manufacturer, and the chemical maker Lubrizol. Berkshire Hathaway, which holds an estimated $62.6 billion in cash, has a diverse portfolio of holdings that includes Heinz in the food sector, Burlington Northern Santa Fe in railroads, General Re in insurance, and Fruit of the Loom in retail, among others.

Buffett, often referred to as the Oracle of Omaha, isn’t exactly known as a trend or momentum investor. Instead, he focuses on companies with longevity that sit at the forefront of their industries and generate large amounts of revenue. Buffett isn’t the type to buy and sell often; he has held some stocks for over 50 years.


Buffett made the offer at the annual Allen & Company conference with PCP Chairman and Chief Executive Mark Donegan. Buffett reportedly became aware of the company through investment manager Todd Combs’ stake in it.


The Portland, Oregon-based Precision Castparts was established in 1949 and makes turbine airfoils, valves, fasteners, and other products used in the defense, gas, energy, and aerospace industries. The company reportedly has annual revenue of $10 billion, and its parts are used by aerospace giants such as Boeing and Airbus. The question to ponder: should we all shoot for the stars as Buffett has and invest in the aerospace industry? Buffett has profited from non-traditional moves before; after all, when the economic crisis hit in 2008, he made major investments in both Bank of America and Goldman Sachs. It will be interesting to see how Berkshire Hathaway’s largest acquisition yet compares with the rest of its diverse portfolio.


Graphene is White Hot in the Next Dimension


The wonder material graphene has recently tackled another dimension and found another exciting application for the future of technology. If your phone has ever overheated on a hot day, you’re going to want to read this.

Hexagonal boron nitride (h-BN), a material similar to graphene and known as white graphene, is an electrical insulator. Normally a 2-D material, white graphene shows serious heat-withstanding capabilities in a newly proposed, complex 3-D lattice structure. In most materials used to build electronic devices, heat moves along a plane rather than between layers, where it could dissipate more evenly, and the result is frequent overheating. That is also the case with 2-D hexagonal boron nitride, but not when the same material is simulated in a 3-D structure.

Rouzbeh Shahsavari and Navid Sakhavand, research scientists at Rice University, have just completed a theoretical analysis of a 3-D, lattice-like white graphene structure. It uses hexagonal boron nitride sheets and boron nitride nanotubes to create a configuration in which heat-carrying phonons move in multiple directions—not only along planes, but across and through them as well. This means that electrical engineers now have the opportunity to move heat through and away from key components in electronic devices, which opens the door to significant cooling opportunities for many of the electronics we use daily, from cell phones to massive data server storage facilities.

In an interview with Fortune, Shahsavari clarified the process further with an explanation of 3-D thermal-management systems. Essentially, the shape of the material, and its mass from one point to another, can actually shift the direction of the heat’s movement. Even when heat is inclined to flow in one direction, the structure acts as a switch that reverses the flow, distributing heat more evenly through the object. The boron nitride nanotubes are what enable this transfer between layers to occur.

For most of us, this just means that in the near future we may be able to worry less about our smartphones and tablets overheating. For engineers, it may mean an entirely new approach to cooling through the use of white graphene, which could potentially provide a better alternative to cooling mechanisms like nanofluids. Those interested in an incredibly complex scientific explanation can read more about how this dimensional crossover works in the team’s published research.

photo credit: Shahsavari Group/Rice University

Tesla Trailblazes New Frontiers in Solar Power


In 1953 Charles Wilson, then president of General Motors, remarked, “as General Motors goes, so goes the nation.” This wasn’t arrogance talking, but fact: GM was the largest corporation in America, employing over 850,000 workers worldwide and capturing 54% of the U.S. auto market. The Detroit-based behemoth was on the cutting edge of several automotive innovations, including an early mass-produced V8 engine and the first application of air conditioning in a car. A year after Wilson made that remark, General Motors produced its 50 millionth car. Fast forward to GM’s faulty ignition switch scandal and its bankruptcy fall from grace, and we’ve seen a decoupling of General Motors from the nation at large. On the upside, this shift has allowed other innovators to enter the playing field. Chief among them is Tesla Motors, led by entrepreneur extraordinaire Elon Musk.


Tesla Motors doesn’t operate on the same scale that GM once did, but the company has leveraged its electric vehicle initiatives to spur change across the industry. For instance, in 2014 Musk announced that Tesla’s technology patents could be used by anyone in good faith to speed the development of electric cars. Tesla’s latest foray into selling solar-powered batteries indicates a new alliance forming between our country’s future and an automaker—one that wants the public to think of it not just as a car company, but as an “energy innovation company.” The Tesla Powerwall, which launched this spring and is already sold out through 2016, is a rechargeable lithium-ion battery designed to store energy at the residential level for load shifting, backup power, and self-consumption of solar power generation.


Tesla’s latest product development signals a growing focus on renewable energy, centered on resurgent solar power. Given Elon Musk’s diverse entrepreneurial background—not to mention the fact that he’s also the chairman of SolarCity, America’s second-largest solar provider—Tesla’s move from the auto industry into the energy sector makes sense. The hope for the Powerwall battery is that it helps us move off the grid with clean energy, using the sun’s power even when it’s not shining.


Analysts at GSV Capital predict that Tesla’s move into the solar battery industry will be a watershed moment, because it captures these five key trends driving global renewable energy:

 

1. Abundance: Solar energy is starting to look like a cheaper, more viable alternative to fossil fuels.

2. Storage: Batteries continue to get cheaper and better, proving the biggest criticism of solar power—that it’s unreliable—wrong.

3. Distribution: The Powerwall allows consumers not just to buy and use batteries, but to produce and store energy for future use.

4. Intelligence: Energy tech is starting to get the same treatment as every other digitized, highly intelligent aspect of our lives. Algorithms are starting to create an “energy internet.”

5. Financing: New financing sources are emerging to promote clean tech with incentives for consumers and businesses adopting greener consumption habits. Fast Company has covered the Powerwall by the numbers extensively.

 

Solar energy currently accounts for only about half a percent of the world's total energy consumption, but the innovations signaled by Tesla, along with the five trends that solar energy companies are starting to tap into, are an exciting indicator that the future of renewable energy will be shining brightly, 24 hours a day.

 

Images via Tesla

Infographic via Strom-Report

Startup Bracket Raises $85 Million to Rewrite the Cloud

It's an exciting time for the cloud computing industry. Nasdaq reported that cloud services grew by 60% last year, and according to experts, the next five years will see continued exponential growth. But this monumental growth and market transformation does not come without risks. The increasing reliance on the cloud for storage and computing power means sending sensitive data between data centers, which exposes it to more potential points of infiltration. And because so many cloud services overlap, once hackers get inside a network, their reach can be vast. So before the champagne is popped, vulnerabilities must be addressed.

 

New security horror stories happen all the time now. An international hacking ring hit 100 banks in 30 countries and stole $1 billion. Hackers gained data on 70 million people when Anthem, a prominent health insurer, was breached. Home Depot was recently targeted, and hackers took credit card information for more than 50 million people. Once a hack like this happens, the damage can be devastating. Not only does Home Depot's reputation suffer, but a hack can stop new digital initiatives in their tracks. Plus, customers have to go through the tedious task of calling their banks and reissuing their credit cards to prevent future fraudulent purchases. The road to recovery can be very long.

 

It's not only companies that are at risk; governments are, too. China is accused of being behind a recent hack on the United States federal government that exposed information on 18 million federal employees. Even America's oldest pastime isn't safe, with baseball teams getting hacked these days. The lesson here is that with every convenience there is a trade-off. Having access to powerful systems running on nearly perfectly reliable servers has eliminated the problems of being tied to one machine, installing programs locally, and losing data when a computer crashes for good. At the same time, these massive databases present an attractive target for hackers and criminals who understand that gaining access to even part of a database means an ocean of valuable information.

 

Most companies of Home Depot's scale use highly protected enterprise data services from providers like Cisco or Oracle, leaders in cloud services with (generally speaking) very secure offerings. These services are also very expensive, yet large companies feel compelled to keep using them because public cloud offerings are not viewed as secure enough. One Silicon Valley company, Bracket Computing, believes it has found a way to make public cloud services secure enough to handle sensitive corporate data.

 

In a nutshell, Bracket uses encryption wrapping to protect a company's corporate applications without making them harder to use. The encryption happens before data is sent to remote servers, and the customer is the only one holding the encryption keys, which limits exposure to the point where very sensitive customer information can be transferred and handled with a higher degree of confidence. Investors are confident too: Bracket recently raised $85 million in funding from investors like Qualcomm and GE to roll out its hyperscale cloud security solution.
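
Bracket's technology is proprietary, but the core idea, encrypting data on the customer's side so the cloud never holds the keys, can be sketched in a few lines of Python using the open-source cryptography package. This is a generic illustration of client-side encryption, not Bracket's actual implementation:

```python
# Minimal client-side encryption sketch: data is encrypted before it
# leaves the customer's machine, and only the customer holds the key.
# A generic illustration of the idea, not Bracket's actual system.
from cryptography.fernet import Fernet

# The key is generated and kept by the customer, never by the cloud.
key = Fernet.generate_key()
cipher = Fernet(key)

record = b"customer card ending 4242"
ciphertext = cipher.encrypt(record)  # this is all the cloud ever stores

# A stolen copy of the ciphertext is useless without the key.
assert cipher.decrypt(ciphertext) == record
```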

 

No computer system will ever be 100% secure, but by bringing enterprise-level security to public cloud services, at least more companies should be able to confidently harness the advantages of the cloud while losing as little sleep as possible.

 

Images via Zoeey and Perspecsys

Google Street View Scales New Heights with El Capitan

Only a small handful of elite climbers have ever scaled the 3,000-foot vertical face of El Capitan, the most prestigious rock climbing destination in the United States. Call it a rite of passage in a sport where finding footholds in a smooth granite face is the only way to reach the top. It's a sport reliant on carabiners and ropes, not exactly high-tech stuff. Which is why Google's recent launch of its Street View technology on the face of El Capitan is so unexpected, giving non-climbers a chance to virtually scale the beautiful granite monolith in Yosemite Valley.

 

So how did Google recreate its Street View feature on a 3,000-foot rock face? It's not as if a car with a camera mounted on top could drive up El Capitan's nearly vertical ascent. Instead, the Google Street View team enlisted three expert climbers, Tommy Caldwell, Lynn Hill, and Alex Honnold, who worked together to mount a tripod camera with ropes, pulleys and anchors onto the rock face at 23 different intervals along their climb. To put El Capitan's scale in perspective, picture more than two Empire State Buildings stacked end to end.

 

No stranger to "El Cap," as the rock face is affectionately known among climbers, Tommy Caldwell made history with Kevin Jorgeson this past January by completing the first successful free climb of El Capitan's legendary Dawn Wall route. Ascending 3,000 feet over 19 days, Caldwell and Jorgeson did not use ropes to pull themselves up, only to catch them if they fell. Of attempting a feat most called impossible, Caldwell said, "I love to dream big, and I love to find ways to be a bit of an explorer. These days it seems like everything is padded and comes with warning labels. This just lights a fire under me, and that's a really exciting way to live." Now anyone with an internet connection can get in on the excitement.

 

Getting back to the Google Street View climb: at each interval the climbers mounted the camera and took multiple shots, and the photos were then seamlessly stitched together into a 360-degree, high-definition panorama. We've come to know and love this photo-stitching technology for checking out unfamiliar neighborhoods and street views on Google Maps, but instead of cars, pedestrians, and corner delis, the El Capitan imagery shows Yosemite Valley at its best: sweeping views of the glacial valley from a vantage point few have ever experienced. For a particularly challenging stretch of the climb, the famous Nose route, one of the climbers carried the camera equipment on his back. His pack featured a custom rig with six small cameras angled in different directions, which fired automatically every few seconds.
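
Google's pipeline is its own, but the underlying technique, matching features in overlapping photos and warping them onto a common surface, is available off the shelf. Here is a minimal sketch using OpenCV's high-level stitcher; the file names are placeholders:

```python
# Minimal panorama-stitching sketch using OpenCV's high-level API.
# It illustrates the general technique (feature matching, warping,
# blending), not Google's own pipeline; file names are placeholders.
import cv2

files = ["el_cap_01.jpg", "el_cap_02.jpg", "el_cap_03.jpg"]
images = [cv2.imread(f) for f in files]
assert all(img is not None for img in images), "missing input photos"

# The stitcher finds features shared by overlapping photos, estimates
# each camera's orientation, warps the images onto a common surface,
# and blends the seams into one panorama.
stitcher = cv2.Stitcher_create(cv2.Stitcher_PANORAMA)
status, panorama = stitcher.stitch(images)

if status == cv2.Stitcher_OK:
    cv2.imwrite("el_cap_panorama.jpg", panorama)
else:
    print("Stitching failed with status code", status)
```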

 

For lovers of the preserved wilderness of Yosemite Valley, this is the biggest homage to its natural beauty since Ansel Adams's famous series of black-and-white photographs, which he began shooting in the 1920s and continued to capture for several decades, among them his 1952 image of El Capitan at sunrise.

 

While this is the first time Google has applied its Street View technology to a climbing route, the company has been actively mapping the world off-road for some time. It's off to an auspicious start that would make Ansel Adams proud.

 

Top Image: Flickr/Peter Liu Photography

Graphene Helps Create World’s Thinnest Lightbulb

A group of scientists from the U.S. and South Korea recently reported the first demonstration of on-chip visible light emission using graphene. The group, led by postdoctoral research scientist Young Duck Kim, attached small strips of graphene to metal electrodes and suspended the strips above the substrate; passing current through these graphene filaments heated them until they glowed. Kim's team includes members of James Hone's group at Columbia University as well as researchers from Seoul National University and the Korea Research Institute of Standards and Science.

The full findings can be found in the group’s report, Bright Visible Light Emission from Graphene.

James Hone elaborated to Phys.org on how the new findings could pave the way for "atomically thin, flexible, and transparent displays, and graphene-based on-chip optical communications." Hone attributed those potential advances to what the team considers a "broadband light emitter."

Pardon the pun, but the future looks bright in this sector, as the work helps bridge a gap on the way to light-based circuits that do for photons what semiconductor circuits do for electric currents. With graphene taking over the role of the filament, the team can put an incandescent light source directly onto a chip. This had been impossible until now: a filament must reach temperatures above 2,500 degrees Celsius to glow visibly, and that kind of heat destroys conventional chips. With graphene in that role, the temperature problem is overcome and the likelihood of damaging the chip is greatly reduced.
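
As a back-of-the-envelope illustration of why a filament must be so hot before it glows (our estimate, not a figure from the paper): an incandescent filament behaves roughly like a blackbody, whose peak emission wavelength follows Wien's displacement law.

```latex
% Wien's displacement law: the wavelength of peak blackbody emission
% depends only on temperature T (b is Wien's constant).
\lambda_{\text{peak}} = \frac{b}{T}, \qquad b \approx 2.898 \times 10^{-3}\ \text{m K}

% At T \approx 2800\ \text{K} (roughly 2500 °C):
\lambda_{\text{peak}} \approx \frac{2.898 \times 10^{-3}}{2800} \approx 1.04\ \mu\text{m}
```

Even near 2,800 K the emission peak sits in the near-infrared; only the short-wavelength tail of the spectrum falls in the visible range, which is why a filament must reach such extreme temperatures before it visibly glows.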

The group’s work continues as they try to advance the field further. At this time, their efforts are focused on characterizing device performance to determine ideal integration techniques. Hone further added, “We are just starting to dream about other uses for these structures—for example, as micro-hotplates that can be heated to thousands of degrees in a fraction of a second to study high-temperature chemical reactions or catalysis.”

2015 is shaping up to be a banner year for graphene research. What remains to be seen is whether graphene becomes the material that revolutionizes several facets of innovation; it is certainly shaping up that way. However, other "super materials" could eventually become preferred for nanoscale electronics. Transition-metal dichalcogenides (TMDCs) have a significant advantage over graphene in that many of them are semiconductors. Another likely outcome is that several 2D materials will end up serving side by side as primary working materials, each valued for its own properties.

Regardless, graphene holds supremacy at the moment, so we should expect more innovation in the weeks and months ahead. With this latest finding, graphene further entrenches itself in the fabric of modern innovation, and the work of Kim, Hone, their team and countless other researchers could drive significant growth in the coming years.

Image: Flickr/University of Exeter

 

Is 2015 The Year of the Graphene?

Move over, goats. The Chinese zodiac may call 2015 the Year of the Goat, but it could turn out to be graphene's watershed moment. No longer is the thin, durable, conductive material just the latest nanotech flavor of the month; graphene means business.

Over the past few months, we’ve seen a plethora of stories both in scientific and mainstream media on the latest developments with graphene, ranging from the practical (long-lasting lightbulbs and efficient batteries) to the awe-inspiring (holographic projections and solar sails). Even more exciting? 2015 isn’t even halfway over.

Here are just a handful of the discoveries made this winter and spring:

February

Stronger Metal

Researchers at the Korea Advanced Institute of Science and Technology combine graphene with copper and nickel to strengthen the metals by 500 and 180 times, respectively. Using chemical vapor deposition, the researchers created ultra-tough composite materials with a vast array of practical applications. Even more interesting, this was accomplished by adding just 0.00004% graphene by weight to the resulting compound. Link

Flexible Electronics

Thanks to researchers at the Universities of Manchester and Sheffield, we may soon have flexible LED screens only 10 to 40 atoms thick. With a combination of graphene and 2D crystals, these scientists created a heterostructure LED device that emits light, flexes easily, and exhibits incredible toughness and semi-transparency. Link

March

Efficient Water Filters and Fuel Cells

At Northwestern University, Franz Geiger found that imperfect, or porous, graphene allows water and hydrogen-based solutions to traverse the material’s surface in highly efficient and controllable ways. Depending on the size of the perforation, anything from protons in energy transfers to water molecules can pass through a porous layer of graphene. This opens up considerable possibilities for clean tech, filtration and other functions. Link

Easier Graphene Manufacturing

Caltech scientists figure out how to manufacture graphene at room temperature, overcoming a major hurdle to scalable production of the material. Borrowing a 1960s-era process for generating hydrogen plasma, Caltech's David Boyd found that the plasma strips copper oxide from copper foil, letting graphene grow directly from methane without the usual high heat. Implemented on a grand scale, this could make graphene production far more cost-effective than previously believed. Link

Light Bulbs

It should come as no surprise that many of the biggest graphene advancements stem from the University of Manchester, the same place the material was first isolated in 2004. Among the school's most recent discoveries: a lightbulb with a graphene-coated filament that both lasts longer and trims energy waste by 10%. Projected to be commercially available within a year, these bulbs will likely cost less than other LED bulbs on the market. Link

April

3D Holographic Screens

With little more than an ultra-thin graphene oxide mesh and lasers, a group of international researchers created a floating hologram. By projecting lasers onto the flexible mesh, an array of nanoscale pixels bent the light to display various optical effects. Star Wars-style communications may not be far off. Link

Electric Ink for 3D Printers

Multiple researchers debuted graphene-based 3D printing materials this spring. Among the more interesting, scientists at Lawrence Livermore National Laboratory and Northwestern University introduced processes for 3D printing graphene aerogels and biocompatible graphene-based polymers, respectively. Link Link

Highlights from NAB 2015

As many of you already know, NAB just wrapped up this past week in Las Vegas. NAB, the National Association of Broadcasters' annual convention, is where the biggest names in video-related software come together to show off their new products.

Every year I walk away from NAB with a feeling of excitement, reflecting on the products I believe will change the fields we work in. This year, three products really stood out above the rest.

The 4k Phenomenon

It seems that words like HD, 1080p and Blu-ray have only recently become household terms when talking about video; in fact, about 96.7 percent of Americans own an HDTV. But since technology never stops improving, the industry is constantly evolving and has already produced a new, better form of video: 4k Ultra High Definition (UHD).

What is 4k UHD?

4k UHD is defined as a video resolution of at least 3,840 x 2,160 pixels. Considering that current HD video peaks at 1080p (1,920 x 1,080), the pixel count has quadrupled, making 4k clearly the next big step in home entertainment. 4k TVs are already on the market, ranging from a 39” set for $500 all the way up to an 85” model for $40,000. Click here for more prices.
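
For the curious, the quadrupling claim is quick to verify, since both dimensions double:

```python
# Quick check of the resolution math: 4k UHD has exactly four times
# the pixels of 1080p HD, because both width and height double.
hd = 1920 * 1080    # 2,073,600 pixels
uhd = 3840 * 2160   # 8,294,400 pixels
print(uhd / hd)     # 4.0
```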

Don Basile and Crew’s Top 3 Reasons to Buy an Apple Watch

With Apple's next conference (presumably announcing the Apple Watch) getting closer and closer, rumors have begun to spread about what capabilities the Apple Watch will include. After doing a bit of research, we've come up with our top 3 reasons we want to buy one (and you should too)!

1. It’s a Fashion Statement
The first thing we need to get out of the way is that the Apple Watch is as much a fashion statement as it is a tool. Options range from a less expensive, fun and colorful watch your friends will love to something pricier and more professional, like the stainless steel version that will look great with your next business suit. Heck, they're even offering an 18k gold watch for all you high rollers out there! Our point is this: there will be a version of the watch for everyone, with plenty of options to fit your unique personality.

Net Neutrality, What Is It And Why Should I Care?

The fact is, net neutrality will drastically affect every one of us, and if we don't educate ourselves and voice our opinions on the topic, we could see major changes in the way we use the internet today. So what is it? These videos, starting with this one by The Verge, give fantastic explanations of exactly what net neutrality is, what the FCC is trying to accomplish, and why people are either for or against it.

The Rise of the Personal Drone

Every once in a while, a piece of technology is released to the public that changes the world. Over the past two years, a new product has been developed that seems to be doing just that once again. Introducing: the personal drone. Okay, so most of you have already heard of them. A lot of you have probably seen or even used one. But for those of us who haven't (or maybe need a refresher), let's take a look at what they are and what they're capable of doing.

Personal Drones and Their Many Uses

The personal drone, also known as an unmanned aerial vehicle (UAV), is essentially the RC helicopter's bigger, much more advanced brother. The basics, like flying the craft around with a remote control, are still in place, but recent advances in gyros, batteries, cameras, GPS and more have turned the old toy helicopters into something far more evolved. Seemingly out of nowhere, what many once considered a hobbyist's toy has become an aerial tool!

Apple and IBM Working Together?!

That's right, you probably never thought you would see a headline like that, but a lot has changed over the years. Back in the late '80s, Apple and IBM were in a constant battle to prove whose product was better. While IBM focused more on the corporate side of things, Apple worked on revolutionizing the industry by creating a better, easier user experience.

Now both companies have decided to look at things in a new light. Instead of seeing each other as competition, they are working together to give users a better product than either could offer on its own. On Dec. 10, 2014, the companies released the first products born of their new partnership!

Is Warren Buffett’s Duracell purchase the right move?

Last week it was announced that Warren Buffett's holding company, Berkshire Hathaway, plans to buy Duracell from Procter & Gamble. Buffett's firm will pay around $5 billion for the company, which raises the question: is it worth it?

According to P&G, Duracell holds 25 percent of the global battery market. But that alone doesn't make it a great investment; in fact, Duracell has been underperforming its own past results in recent years. More likely than not, Duracell appeals to Buffett because it sells a product that consumers need to purchase again and again.

That's where the problem arises. More and more products are being sold with rechargeable batteries built in. The "batteries not included" label isn't as common as it used to be. Just look at your cellphone, camera or laptop. As technology progresses, the majority of products no longer require separate disposable batteries.

Duracell needs to make a change, because the age of the disposable battery is slowly coming to a close. This is where Buffett can help. With its well-established name, Duracell now needs to refocus and look into new opportunities. For example, the company has already been developing lithium-ion batteries, giving it a foothold in the rechargeable battery market. Focusing on that technology and pushing it to the next level could be huge.

So, back to the question: will the deal be worth it to Warren Buffett? In my opinion, yes, as long as Buffett has a plan to change things within the company and evolve with the technology. Will it happen? We don't know for sure, but we do know there is potential, and plenty of hard workers on both Duracell's and Buffett's teams. Give it a little time, and I'm sure we'll see Duracell back on top in the long run.