Five key ideas you need to understand now if you want to be ready for the world of 2030

With a new series of the BBC science podcast Futureproofing on air this month, presenter Timandra Harkness explains how to get ready for the world of the future.

Published: April 8, 2019 at 3:03 pm

Home of the future

Just a building that keeps the rain out and your clean underwear in? Think again. The home of 2030 will be smart, connected and emotional.

“Our notion of home will change drastically,” says Sce Pike, CEO of Oregon-based IOTAS. “The notion of home is no longer four walls and a roof and a place, a location, but actually something that travels with you throughout your life.”

How does it do that? By learning your preferences and your habits, and automatically adjusting the light, heat, even your TV channels, before you have to ask. “It’s about how that home reacts to you and makes you comfortable,” says Pike.


We will probably move house more often in 2030, from necessity and choice, but we’ll feel at home more quickly as smart technology restores our familiar surroundings and routines seamlessly. Even an overnight stay for work could pick up from where you left off, assuming your hotel or AirBnB is running the same system.

Using our homes to communicate with each other will also be useful as we all get older. We want to be independent, but our families worry about us. Pike is using technology to help with that too.

“We’ve set up a system where, if my parents don’t make coffee every day between a certain time window, then the home will just text me to call my Mom. It doesn’t have to tell me why, so it’s not violating their privacy in any way, but I know that something’s off.”
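IOTAS hasn’t published how its rule engine works, so the snippet below is only a sketch of the kind of check Pike describes. Everything in it is hypothetical: the event log, the coffee-maker name, the time window and the notify callback are stand-ins, not the real product.

```python
from datetime import datetime, time

def coffee_made_today(events, start=time(6, 0), end=time(10, 0)):
    """True if the coffee maker ran inside the expected window today.

    events is a hypothetical log of (appliance_name, timestamp) pairs
    reported by the smart home's sensors.
    """
    today = datetime.now().date()
    return any(
        name == "coffee_maker" and ts.date() == today and start <= ts.time() <= end
        for name, ts in events
    )

def daily_check(events, notify):
    # Only the fact that the routine was missed leaves the house,
    # not why - which is the privacy trade-off discussed below.
    if not coffee_made_today(events):
        notify("Routine missed today - maybe give Mom a call.")
```

The design point is that the house shares a single yes/no signal rather than a full activity log, which is exactly the balance between reassurance and privacy the next paragraph weighs up.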

Independence may come at the cost of somebody knowing your private business, whether it’s your family, a carer, or just the artificial intelligence monitoring when you make coffee and how often you’ve opened the fridge. But if we can stay in our own home instead of moving into a home, we may feel that’s a price worth paying.

Changing Reality

First, the good news: Most of us don’t share, or even read, fake news. Most false stories are seen and passed on by a small minority of social media users.

The bad news? As technology improves, it will only get harder to distinguish fakes from the real thing. Techniques used by film studios to insert younger versions of ageing, or even dead, actors into movies can also be used on politicians, celebrities, or you, creating what are known as deep fakes.

By 2030 it may be impossible to know for sure whether Barack Obama really said the Brits are all witless fops, and whether that viral video was really Prince Harry romping naked on Hampstead Heath.

Even worse, if hackers can fake video of you saying whatever they want - your passwords, perhaps - your identity will no longer be yours. Anybody could pass themselves off as you.

Technology may be taking us into a world where we’re not sure what’s real, but technology may also help us tell fact from fiction. AI startup Fabula is using machine learning algorithms to identify fakes and mistakes on the internet.

Co-founder Michael Bronstein, professor of computing at London’s Imperial College, explains what makes Fabula different from existing fact-checkers, human or automated.

“Standard approaches for fake news detection are content-based: they try to identify linguistic or semantic cues that distinguish fake from real. Our approach is radically different: we look at the way the news spread on a social network.”

True and false information moves through social networks in different ways, and the same small groups of people tend to share most of the unreliable content. Fabula ignores the content of the message and looks for familiar patterns in how it’s being shared.

“We developed a new class of machine learning methods we call geometric deep learning,” says Bronstein, designed for “graph-structured data such as a social network.” He envisages a trust score for content, like a credit reference, to help us know what to believe.
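Fabula’s production models aren’t described in detail here, so the following is only a toy illustration of the idea: score a story by the shape of its reshare graph and by who spread it, ignoring the text entirely. The chosen features, the fixed weights and the two-hop averaging are assumptions for illustration; real geometric deep learning learns these from data.

```python
import numpy as np

def cascade_summary(adjacency, user_features, hops=2):
    """Summarise how a story spread, ignoring what it says.

    adjacency[i, j] = 1 if user j reshared the story from user i.
    user_features has one row per user (e.g. account age, follower count).
    Each hop averages neighbours' features - a crude message-passing step
    standing in for a learned graph convolution.
    """
    a = adjacency + np.eye(len(adjacency))      # keep each user's own features
    a = a / a.sum(axis=1, keepdims=True)        # row-normalise
    h = user_features.astype(float)
    for _ in range(hops):
        h = a @ h                               # mix in neighbour information
    return h.mean(axis=0)                       # one vector for the whole cascade

def trust_score(summary, weights, bias=0.0):
    """Map the cascade summary to a 0-1 score, in the spirit of the
    credit-style trust reference Bronstein envisages (logistic regression
    stands in for the real classifier)."""
    return 1.0 / (1.0 + np.exp(-(summary @ weights + bias)))
```

Note that nothing in the snippet reads the story itself: only the structure of the sharing cascade and coarse facts about the accounts involved feed the score, which is the “radically different” part of the approach.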

But don’t relax too quickly. As Bronstein himself points out, fake news and deep fakes are much more than just a technological problem.

“Fake news is first and foremost a problem of trust,” he says. “Fake news and social networks have eroded trust in the traditional media that were unable to adapt to the new reality. Solving fake news requires rebuilding the news ecosystem, and educating people to critically think and be more responsible on the social networks.”

Sin

By 2030, technology could help you lead a better life, morally as well as physically. For example, you might programme your sat-nav to choose routes home from work that don’t pass any pubs. Religious leaders could release apps that lead you away from temptation, whether that’s gambling, texting the ex, or eating the wrong food.

There are already apps like Pronto Eat that tell you which takeaway meals fit your particular dietary rules, whether they’re Kosher, Halal or vegan. Others help you shop according to ethical standards for clothes or coffee.

You can download an app to help you make more ethical decisions, written by philosophers to guide you through a process of reasoning and reflection. It’s been described as a conscience in your pocket.

“Algorithms that can assist individuals with recognizing their biases and promote positive nudging could be very beneficial,” says The Venerable Tenzin Priyadarshi, Buddhist chaplain at MIT and head of their Dalai Lama Center for Ethics and Transformative Values. Though “it does presume a willingness to change on the part of the individual.”


He also points out, however, that all this technology risks diluting our sense of agency: the acceptance of moral responsibility for our own actions on which a conscience depends.

“20 years ago we didn’t have cyber-bullying, cyber-stalking, fake-news-led actions, morphed images, etc.” Nudging can help, but it doesn’t really address the roots of bad behaviour, he warns: “It is important for people and organizations to understand causal links and behaviours.”

It’s tempting to imagine the real world becoming a more virtuous world, with sin relegated to the virtual world. But Tenzin has warnings about that too:

“Our brain doesn’t do an excellent job in discerning between the virtual and the real. We will still be nurturing the negative dispositions of our behavioural mind by engaging in negative interactions in the VR world.”

In other words, doing sinful things, even in a virtual world with no direct consequences for anyone else, can increase our inner propensity towards non-virtuous thoughts, even if we never act on them.

So ultimately, though the forms of sinful action may change, the work of resisting temptation and living a virtuous life will probably still be a job for humans, not for algorithms.

Genome Editing

CRISPR-Cas9 was recently used to create what are claimed to be the world’s first genome-edited babies in China. This relatively new technique hijacks a natural defence system used by bacteria to fight viral infection, using molecular ‘scissors’ to cut out precise sections of DNA inside the cells of a living organism, and replace them with different versions.

Since a single error among the 20,000 or so genes in your genome can cause serious disorders, being able to find that mistake and correct it, almost as easily as using the ‘find and replace’ function in a word-processing program, has huge promise.
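That analogy can be made literal in a couple of lines, with the caveat that real editing is nothing like string substitution: Cas9 has to be delivered into cells, can cut at unintended sites, and the repair isn’t always clean. The sequences below are invented purely for illustration.

```python
# The word-processor analogy made literal. The sequences are invented;
# real genome editing is far messier than a string replace.
genome  = "ATGGTGCACCTGACTCCTGTGGAGAAGTCT"
faulty  = "GTGGAG"   # hypothetical disease-linked variant
healthy = "GAGGAG"   # hypothetical corrected sequence

edited = genome.replace(faulty, healthy, 1)   # 'cut out' the target, 'splice in' the fix
print(edited)   # ATGGTGCACCTGACTCCTGAGGAGAAGTCT
```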

By 2030, using CRISPR to edit the human genome will be another weapon against disease and genetic conditions. “It could potentially impact any medical condition we understand,” says genetics researcher Günes Taylor of London’s Francis Crick Institute. “If we know what has gone wrong, this is the tool we can use to fix it.”

Compared to previous tools available for tinkering with the genome, CRISPR-Cas9 is accurate, fast and cheap. And researchers are already finding ways to improve it still more. Cas9, the enzyme that cuts the DNA and splices in the replacement section, has rivals with exciting names like Cas13 and CasX.


Using the CRISPR technique to edit the genes in your cells could treat inherited conditions or lower your risk of developing Alzheimer’s. There is even talk of being able to reverse the biological ageing process.

How far could we go in eliminating disease? Most illnesses aren’t caused by a single gene going wrong, but by a combination of many genes, environmental factors, and sheer bad luck. Some genes that predispose us to one illness turn out to simultaneously protect us from another.

Then there’s the cost. Will the benefits truly be available to all? “Affordability is complex,” says Taylor. “I’d like to think so, but it’s not the CRISPR part that’s expensive. It’s the delivery, the checking, the medical personnel cost that’ll add up. Not a couple of molecules.”

Editing the genome of a human being also raises ethical dilemmas. The use of CRISPR-Cas9 on unborn babies in China has provoked fierce debate.

Taylor hopes that doctors will be allowed to use the technique to benefit people in future, but “the finer details remain to be defined - in born people, or in embryos? Prevention or curative? Presumably different cultures will find different approaches and applications ethical. So it’s going to be a very complex future.”

Memory

Our memories are part of who we are, our unique, personal record of the past, and when we die they vanish, lost in time “like tears in rain”, as the android Roy Batty put it in the movie Blade Runner. But perhaps that’s about to change.

Ted Berger, professor of bioengineering and neuroscience at the University of Southern California, has been working on how we encode memory as electrical signals in a part of the brain called the hippocampus.

“If we put electrodes into this part of the brain and record the activity, when we show a picture of your face to another human, we can see what the codes are,” he told me. What’s more, if he then played that electrical code back into another human brain, that person would also see my face, opening the door to recording memories for digital storage, or even transferring memories from person to person.

When I spoke to Professor Berger for Futureproofing in 2016, he had already used this technique to transfer memories between rodents, and to boost their recall of their own memories by playing the codes back into their brains. The next big step would be to use his science in humans with memory problems, perhaps because of brain injury or Alzheimer’s.

The problem is, implanting electrodes deep in a human brain is not something to be done lightly, just for research. Berger’s work with humans relies on a few people who have electrodes implanted for other reasons, mainly to treat epilepsy.

This research has shown short-term and long-term memory performance can be improved by around a third, by playing back encoded memories into the brain. Berger’s team describe this as “an important first step in the development of a neural prosthetic for memory.”

The next step is to find non-invasive ways to deliver the signal. Several commercial enterprises, from Elon Musk’s Neuralink to a tech-billionaire-funded start-up called Kernel, are looking for the engineering that will bring this technology within easy reach of more people.

But Berger knows there is more to memory than remembering. Sometimes we might prefer help to forget. “It’s worth thinking about how this technology could be used to weaken some memories,” he told me. “Memories that result from traumatic events. There might be a way of discovering the memory codes for those events, and then weakening that code.”

  • Futureproofing, Radio 4’s dive into the ideas that will shape the future, returns this April. You can catch up on all previous episodes here.
