
The Spiderweb Effect: Why We Can’t Predict Technology’s Real Impact

I’ve always been fascinated by the relationship between humans and technology. Part of it is the fun of looking at things in retrospect, like, wow, that printing press sure did blow the lid off everything! But it’s just as fascinating to watch change creep in in real time: computers started out as niche, government-only tools, then infiltrated things like cash registers, and now there are so many of them everywhere that I doubt anyone could accurately count the number in their own home. Where the human-technology relationship gets really interesting is with what I call the “spiderweb effect” – my term for the unforeseeable consequences of a new technology. While we can’t predict the future, is there any way we can look at the human-technology relationship and try to figure some things out?

This raises all kinds of questions about what we’re experiencing with technology today. What is the best way to deal with these changes that are thrown our way? Especially in these days of AI and its super-rapid advancement, how can we handle the speed, or the rate of change? What is driving all of this core development, and even more interestingly, what drives the spiderweb of development that starts branching out from that core?

Ultimately: how can we humans handle a moment in which the change could be as radical as the printing press, but is arriving at a pace that makes even the spread of computers look slow?

TECHNOLOGY CHANGES – THE OBVIOUS & THE NOT-SO-OBVIOUS

There are two main types of changes that accompany new technology: the obvious changes and the spiderweb, or not-so-obvious, changes. The obvious changes are the ones that come as no surprise. With the printing press, it was obvious that literacy would expand, paper would need to become more readily available, and a standard of spelling would be introduced. With Generative AI, it is obvious that the way documents, presentations, and websites are written is going to change.

The spiderweb changes are the ones that branch off from those main, obvious changes – the ones that no one really sees coming, though they can seem completely obvious in hindsight. With the printing press, yes, literacy increased, paper eventually became more readily available, and spelling standardized (sort of). But it’s doubtful anyone at the time saw the religious wars and rising European nationalism coming, both fueled by the spread of vernacular languages in print. A more recent example is the iPhone. When it debuted in 2007, with its shiny new touchscreen, no one could foresee that it would spell the demise of the paper map. It’s fascinating to consider what these spiderweb branches were, and what they caused.

Unfortunately, there’s no way to tell what those spiderweb changes will be with the new AI technology, at least not in an end-state kind of way. However, we do have some clues as to the direction. For example, the iPhone shipping with other capabilities, such as a camera, music player, and GPS, could lead one to speculate about GPS eliminating paper maps and the music player eliminating CDs. We can do the same with what we can see about how AI technology is developing and how we could interact with it. One of the larger changes with AI technology is that very interaction. For decades, humans have interacted with their computer tools through rigid, structured interfaces – keyboards, mice, menus, and command syntax. This forced users to translate their intent into a form the machine would accept. Now, however, Generative AI flips that dynamic. You can simply speak to it as you would a colleague, telling it what you want or need, and it responds. And it expands from there – you can also use the cameras in your devices so that GenAI can “see” what is going on around you. So we are now talking about an interactive technology that can handle more “natural” human modes of interacting. Would this not take the interaction out of the standard computer <> human mode? How could that change how we use computers, if inputs can be more informal and human-like? Is there some point at which we stop considering something a computer, when the interaction is so richly human that it’s impossible to tell it’s a machine?
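To make that shift in interaction concrete, here is a minimal sketch of what “talking to the machine like a colleague” can look like in code. It assumes the OpenAI Python SDK and a multimodal (text + vision) model; the model name, the prompt, and the image URL are illustrative placeholders rather than a recommendation, and other GenAI providers expose similar interfaces.

```python
# A minimal sketch of natural-language, multimodal interaction with GenAI.
# Assumes the OpenAI Python SDK (pip install openai) and an API key in the
# OPENAI_API_KEY environment variable. The model name, prompt, and image
# URL are placeholders for illustration only.
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-4o",  # any multimodal (text + vision) model
    messages=[
        {
            "role": "user",
            "content": [
                # Informal, conversational intent: no command syntax to learn.
                {
                    "type": "text",
                    "text": "Here's a photo of my pantry. What could I cook tonight?",
                },
                # The "camera" half of the interaction: an image the model can see.
                {
                    "type": "image_url",
                    "image_url": {"url": "https://example.com/pantry.jpg"},
                },
            ],
        }
    ],
)

print(response.choices[0].message.content)
```

The point is less the specific API than the shape of the exchange: the request reads like a sentence to a coworker plus a photo, not a command in a syntax the user had to memorize.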

BUILDING THE SPIDERWEB

Beyond deliberate forces like scientific research or capitalism, technological change stems from two fundamental human impulses. The first is necessity: the drive to find an immediate solution to a pressing problem, as the old saying “needs must” suggests. The second is ingenuity: the opportunistic spark that sees a new purpose for an existing tool, a process captured by the thought, “I could use it for…”

“Needs must” is an old-fashioned phrase, which in full runs “needs must when the devil drives”. In more modern terms, it means that when pushed into a corner, humans will do what they need to in order to survive. Perhaps the clearest example of this is in Ukraine. Before the latest Russian invasion on 24 February 2022, Ukraine had been using drones much like any other country. There was hobbyist aviation excitement, and then practical uses started to follow on top of that – one of the clearest being when Ukraine, an agricultural country, started using drones to monitor crops. After that February date, however, priorities shifted hard, and survival became key. Ukraine’s adaptation of drones for military purposes has been second to none, culminating in an attack on Russia’s bomber bases in early June 2025 that will most likely be viewed as a turning point in modern warfare.

Beyond being pushed to adapt, humans are also very clever, and will look at a tool and say, “I wonder if I could use it for…”, with a fill-in-the-blank for each specific situation. One need look no further than people wielding iPhone flashlights in restaurants to read menus to realize these changes can be small, yet still add up to bigger adjustments. The mentality with which people approach their smartphones has shifted as each new use has been uncovered. Smartphones now contain compasses, levels, and magnifiers – uses well beyond a telephone. That same “I wonder if…” mentality has taken flight with drone technology, as users quickly looked past the recreational use and began adapting it for complex real-world problems. Drones are now used regularly in a number of different applications, such as real estate listings, marine biology studies, and post-disaster evaluations. These are the times when humans have looked at the technology available, with its advertised usage, and thought about different ways they could apply the same thing to their own, unique problems.

MORE THOUGHTS + A TIP

There’s a reverse side to the human <> technology adoption bond. While humans adapt to new technology, especially as the rate of technological change has accelerated, humans also reject what doesn’t really work or is simply impractical. 3D movies and VR headsets generated a lot of buzz upon their debut; however, they failed to achieve broad adoption, instead becoming niche purchases. They are nowhere near as ubiquitous as the initial hype would have led you to believe. Given that, it seems fair to say that humans are actually pretty good at sorting through what works for them and what doesn’t.

Where humans don’t do as well is in predicting where things will go: what a new technology will provide, what its consequences will be, and what it will render obsolete. We humans can learn along the way, but that learning is reactive. Businesses put Acceptable Use Policies (AUPs) in place to help guide employees in how to use technology. Laws are enacted once it becomes clear that certain criminal behaviors have emerged, enabled by new technology. But really, there is nothing more humans can do when trying to predict the future except look to the past for clues, understand how we have adapted earlier technologies, and make educated guesses about how we might bend a new technology to our creativity.

In an environment of rapid technological change, this wait-and-see attitude is hard for businesses. Adopt too late, and you have missed the boat. You’re left standing on the shore watching your competition reap the benefits of whatever efficiency gains, transformations, or simply cool new features they’ve been able to use. There is one point of mitigation here – changing how a business interacts with the technology landscape. As Loïc Houssier noted on a recent episode of the Practical AI podcast, a great tip for companies trying to navigate this complexity is to re-check their stance on technology every two weeks. It’s no longer practical to evaluate your business’s response to technology advancements, and how you will use them, on an annual or even quarterly basis. And while a two-week cadence may seem a bit much, the message is clear – be aware of the landscape and determine what cadence works best, leaving behind calendar-enforced intervals for something more pragmatic. The organizations that transform technology evaluation from a periodic, high-stakes gamble into a continuous, low-risk process of discovery will ultimately be the ones that leverage new technologies successfully while others are disrupted by them.

CONCLUSION

The future is exciting. The pace of technological change has continued to accelerate, and humans are looking at some very real potential impacts, both good and bad, in the not-too-distant future. The sticky part is what those impacts will truly be, and how new technologies will actually be used. A thoughtful review of a new technology, along with an imaginative look at the different problems humans have tried to solve over the years, can offer some ideas. And it is from that very same fount of ideas that other changes will emerge, as the relationship between humans and their tools deepens.

Bibliography

Adams, Paul, & Lukiv, Jaroslav. (2025, June 2). Ukraine drones strike bombers during major attack in Russia. BBC. https://www.bbc.com/news/articles/c1ld7ppre9vo

Benson, Chris, & Whitenack, Daniel. (Hosts). (2025, May 20). Emailing like a superhuman [Audio podcast episode]. In Practical AI.

Imperial War Museum (IWM). (n.d.). A Brief History of Drones. IWM Website. https://www.iwm.org.uk/history/a-brief-history-of-drones

O’Connor, Patricia T., & Kellerman, Stewart. (2017, January 23). Needs must when the devil drives. Grammarphobia. https://grammarphobia.com/blog/2017/01/needs-must.html

Rosen, Meghan. (2025, June 20). Cancer DNA is detectable in blood years before diagnosis. ScienceNews. https://www.sciencenews.org/article/cancer-tumor-dna-blood-test-screening
