2036 A.D.

Historians agree we lost our freedom in 2018. We entered a hallway, the door locked behind us – no way back. Actually, my first sentence is inaccurate: there are no historians left, at least not in the traditional sense of people looking back and trying to understand what happened and why, unraveling a thread to the present. Rather, a wave of what we used to call “fake news” took over – people, groups, companies, countries pushing their version of truth, discrediting rival versions and promoting a picture of the past that fit their needs. Actually, this sentence is also incorrect: people, groups and countries didn’t do this, but rather computers, as the late second decade of the twenty-first century drove big data, deep learning and artificial intelligence (AI) into our lives, impacting historical analysis as well as most day-to-day activity.

The Age of Surveillance

2018 was when we first realized we had ceded our private selves, our internal lives, to constant surveillance. First, it was surveillance of our cities (via webcams, sky and ground-level drones) – with every street, alleyway, gutter and rooftop constantly on camera – then surveillance of our homes, workplaces, schools, even bodies (subcutaneous ‘health monitors’ were required by most insurance companies around 2025). In fact, by the end of the second decade it was impossible to find a spot not on camera and in the cloud. Even our most remote and sacrosanct locations – the top of Mount Everest, the rock gardens of Kyoto – were surveilled, broadcast, assessed and marketed.

But perhaps most surprising was our unabashed support of the trend: we willingly offered up our privacy, tossing our most intimate thoughts and moments onto the cloud and into the electronic aether. In fact, we obsessively life-casted, posting virtual bread crumbs about everything from birth to death, and asked the world to tune in as we kept the show rolling via a profusion of electronic apparatus, including mobile phones, wearables, augmented reality (AR) and virtual reality (VR) glasses and visors, connected clothing and body sensors. By the third decade, most of us had created “body area networks” or BANs (similar to the local and wide-area networks, LANs and WANs, of twentieth-century computer terminology) to keep us constantly connected and broadcasting, even in our sleep.

The Power of the Logos

By tapping into this intense human desire to be seen and heard – and the equally intense need to watch ourselves being seen and heard – the network soon made it impossible to find a human off the ‘grid’ – now popularly known as the Logos (a reboot of the ancient Stoic term for a unifying aether or substance that dominated all life with its rational presence, and a portmanteau for the ‘logical operating system’ of life). By 2030, you couldn’t be off the Logos for the simple reason that being off it made life impossible – i.e., you couldn’t get a job or a loan; couldn’t buy a house or insurance; couldn’t get into a decent school or decent healthcare; couldn’t buy groceries or order a cup of coffee.

Of course, there were those who tried. In the early 2020s a Neo-Luddism movement tried to turn back the tide (basically a modernized version of the early nineteenth century Luddites, who attacked the industrial machinery threatening their livelihood at cotton and woolen mills) – by advocating a slower, technology-free existence, and protesting the rapid proliferation of consumer technology, bioengineering and AI. But alas, their ultimate failure was attributed to a lack of awareness, caused ironically by an inability to market to the connected. Nobody knew they existed.

The Citizen Index

And so the Logos, and the way we accessed and contributed to it, became so central to how we lived and governed, that beginning in the third decade each citizen was provided a score – now known as the Citizen Index (CI) – that reflected how effective and compliant they were in the use of the network (for example, using the correct, state-sponsored applications); and how influential they were on the virtual community (for example, the number of “followers” they attracted or the amount of highly-rated content they contributed to the Logos).

The higher your score, the better: loan applications, tax rates, healthcare access, educational opportunities and transportation priority (remember, we long ago abandoned individual car ownership for transportation-on-demand by driverless units; these units were dispatched, and passengers charged, based on their destination and CI score) all improved with a higher score. Conversely, low or non-standard usage of the network curtailed access to key services and significantly increased their cost.

In fact, it soon bordered on criminality if your score was too low. You were deemed potentially dangerous if you had a CI below a certain level, which soon led to an outcry over ‘index profiling’ – that is, singling out citizens with lower scores and publicly harassing or shaming them. Example: you are walking down the street and your face is scanned by a myriad of webcams and BANs (as all faces are now scanned) and the scan shows your CI score; a police officer, seeing the low score on her AR glasses, stops you and wants to know why you are walking down the street and what you are doing. This is only one generic example, but scenarios like this are playing out in many ways and across all communities.

As last year’s #1 hit song, by the 42-year-old veteran crooner Justin Bieber, declared: “Low score, low Life” (double entendre intentional and intact).

Virtual Transcends Physical

So we plunged ever deeper into the virtual, and the virtual dominated our lives (the physical world almost an afterthought . . . mute witness to our digital obsessions). Movies had tried to predict this – think 1995’s Ghost in the Shell, 1999’s The Matrix, 2015’s Ex Machina, or the recent VR remake of Blade Runner. But we never really believed that day would come. Well, it did, starting with AR, where we pasted the virtual onto the physical; then quickly moving to pure VR, where we created worlds to play in, work in, shop in and, yes, romance in.

We went from spending 5 hours per day looking at a slab of glass on the wall, to 8 hours per day looking at a slab of glass in our hands, to 24 hours a day immersed in a virtual existence (including sleep monitors, and the now ubiquitous neuro-oneiric “dream recorders”, which capture and visually display our dreams). We produced, and observed; we watched, and were watched; and we posted everything – enthusiastically, unashamedly – to the Logos.

This shift to virtual was so pervasive, and the impact on the world economy so significant, that by 2025 there was intense pressure to formally endow virtual worlds and personalities with all the rights and protections of physical communities and ‘real’ people. In fact, at one point, the U.S. Congress, harkening back to its embarrassing constitutional valuation of slaves as 3/5 of a person for purposes of state representation, considered valuing a true virtual personality (as distinct from any physical creator) with at least 5,000 verified followers and a CI score in the top 10% to 1/5 of a person; and therefore 25,000 followers with a high CI score would earn the right to full representation.

Of course, issues around virtual recognition and representation are highly complex, and we have not developed workable answers to simple questions like: how do you ensure that a digital personality is totally distinct from a physical or “real” person? How do you protect against manipulation by AI machines – e.g., bots that create virtual identities, then manufacture followers and drive outsized representation? Moreover, what is the meaning of a state, a geographic boundary, when we discuss virtual representation? Congress, built entirely around the concept of physical representation, is slowly wrestling with these issues, but the virtual world rockets forward – particularly in the realm of genetics.

Genetic Manipulation

What does it mean to be human? That question, studied deeply by philosophers, theologians and scientists throughout the ages, gained particular urgency in the early 2020s as we began to alter our biological identities via genetic manipulation. We’d contemplated and tinkered with this before – think Plato’s concept of an ideal “nuptial number” to produce a more enlightened (he called it ‘gold’) population; or Rome’s practice of infanticide to weed out the ‘weak’ and preserve its dominant strength; or Francis Galton’s eugenics, which drafted off Charles Darwin’s work and postulated the creation of a more perfect human – a concept used murderously by the Nazis during WWII.

But it wasn’t until the invention of the CRISPR-Cas9 genetic engineering tool in 2012 that we suddenly had the cost-effective means to actually do it. Originally discovered as an innate component of the bacterial immune system that slices apart viral DNA, CRISPR was soon used to perform genetic manipulations in other kinds of cells. The first applications were in agriculture and ranching: for example, plants were designed that didn’t require as much water to grow – a huge boon to our rapidly warming and drying world; and livestock were bred to be resistant to disease, as well as more muscular and productive – providing more output of dairy and meat for a world population approaching 9 billion.


But very quickly the technology moved into the human realm, first, editing and repairing mutated genes responsible for disorders like cystic fibrosis and sickle cell disease; then, radical experiments to edit our DNA for human enhancement – for example, could we make a faster, stronger athlete? Could we make a soldier with better eyesight and more stamina? Could we make a smarter computer programmer? By 2020, over 1,000 experiments were underway in the world’s best labs to answer these questions. And by 2030 the answer was definitive and clear: Yes.

And so today, just over two decades since the discovery of CRISPR, we have transhumans in our midst – combinants of normal and genetically enhanced DNA – i.e., people who are stronger than unedited humans; people with better eyesight and better memories. People clearly distinguishable from the unedited. Which has also engendered a rapidly evolving caste system: normal (unedited) humans, burdened with the DNA our ancestors gave us, including the random mutations that happen to our genome every second and are so important in driving natural evolution; and trans (edited) humans, carrying DNA that was purposely altered for select advantage.

Who is more valuable to the species? Who has more (or different) rights? Who can afford the genetic edits, and who should get them? These questions are no longer science fiction. And just as we struggle with the impact of the virtual, we are now struggling with the impact of transhuman production. In fact, maybe those science fiction writers of yesteryear were more prescient than we thought – as Arthur C. Clarke dramatically postulated in Childhood’s End, we may soon have a group of Overlords, enhanced beings, benevolent in their intent, who nevertheless upend our biological and social orders beyond imagination.

Final (or First) Thoughts

Before 2018 we were not constantly on camera: we drove our own cars; we owned our DNA, with all its maddening imperfections; our thoughts were our own – in our heads, awash in our own consciousness, with its dreams and fears and loves. But technology trends were building around us, about to alter the fundamental design and meaning of our lives. Of being human and distinct. Of being alive.

Clearly, this is (and we are) to be continued . . .

– by MJB
