
5 Things You Need to Know About the AI Summer We Have Ahead of Us | Maxime Lubbers


Most of you probably noticed that a “certain letter” about Artificial Intelligence (AI) was released last week. The letter, initially signed by more than 1,800 business professionals, academics, and others, asks for a six-month pause on AI research. Among the signatories were Elon Musk, Steve Wozniak (co-founder of Apple), and Yuval Noah Harari (author of Sapiens). This weekend the letter passed 12,000 signatures. If you are wondering why that is, why this letter was written at all, what is in it, and why you should care, continue reading. In this article, I walk you through five things you need to know and explain why this letter could be pivotal for the future of technology and our society.


1. Why was this letter written?

Due to the fast pace of recent AI developments, we currently find ourselves in a time where creators of technology are getting afraid of their own designs. With the release of ChatGPT (built by OpenAI, a firm Elon Musk co-founded), we have started speaking of “advanced AI systems”. Advanced, or strong, AI describes programming that could replicate the cognitive abilities of the human brain. We are not there yet, but today’s systems can already take over repetitive tasks that many human beings consider their full-time job. This is great for some, as it can save costs and improve quality of life, but it could also have disastrous consequences when those jobs are central to the purpose of people’s lives, leaving them unfulfilled and without a paycheck at the end of the month.


Aren’t there any rules then, you might think, around the application and implementation of AI? Well, there are. The widely endorsed Asilomar AI Principles, one of the earliest and most influential sets of AI governance principles, coordinated by the Future of Life Institute, state that: “Advanced AI could represent a profound change in the history of life on Earth, and should be planned for and managed with commensurate care and resources”.


Unfortunately, this extreme care and planning are not happening. That is nothing new: in many cases of technological innovation, the consequences of the technology cannot be understood or foreseen beforehand. Probably no one expected that more than half of the people on Instagram would feel intense pressure to look a certain way because of the filters it offers. Or that Facebook, and social media in general, would leave people feeling isolated when the intention was to connect people and build communities.


So, considering the amount of time we spend online and behind a screen, we need to start caring more, and soon. As we increasingly understand where AI could lead, we should be thankful that the community wrote this letter (even though the intentions of some of the signatories can be doubted). Without the awareness created by institutions like the Center for Humane Technology or the Future of Life Institute, which released this letter, we “civilians” might not even have noticed.


This “awareness” perspective on the letter is constructive: the letter is important because it wakes us up to the unintended consequences of AI and advanced AI developments. The influential people who signed it might even have more information than we do and be even more aware of the dangers ahead. On the other hand, I am also skeptical. When I discussed the letter this week with several people around me, it became clear that the intentions of some of the signatories are questioned. Some tech giants might want the AI community to pause so that their own solutions gain a competitive advantage over the competition (which must pause too). Not to mention what signing might do to their reputation after much-discussed behavior around recent tech-company takeovers.


2. What was in this letter?

Whether you share my positive or my skeptical view, it’s a good thing we pay attention to the dangers of emerging technology in general and AI in particular. Amongst other things, the following is written:

"AI systems with human-competitive intelligence can pose profound risks to society and humanity. Powerful AI systems should be developed only once we are confident that their effects will be positive, and their risks will be manageable."

While I agree with this statement, a six-month break will probably be too short to reach the “right” agreements around AI, or any other emerging technology. These should be “living” agreements: we should be very aware that the regulations need to evolve over time. Businesses, academia, and legal departments need to collaborate to learn and adjust rules and regulations along the way. This would then lead to the requested set of “shared safety protocols”.

"AI labs and independent experts should use this pause to jointly develop and implement a set of shared safety protocols for advanced AI design and development that are rigorously audited and overseen by independent outside experts. These protocols should ensure that systems adhering to them are safe beyond a reasonable doubt."

In addition, the letter calls for a halt to the “dangerous race” to develop systems more powerful than OpenAI’s recently launched GPT-4. But, critically speaking, what counts as more powerful? We don’t know where we are headed. The requested pause is therefore a needed and smart next step.


3. Why was it signed?

Whereas we have heard AI projections for a long time, recent developments have shown the masses what it means when tools are powered by AI. With the emergence of ChatGPT, it has become evident to many how quickly the technology learns, evolves, and takes over repetitive tasks. It has become clear that we need a pause to be able to ask critical questions. I’m thankful that many influential people from multiple disciplines recognize this and share this information via the open letter.

The only way forward is by taking a step back. Critical questioning and long-term thinking are required to prevent us from designing technology that will take over. Key questions asked in the letter:

- “Should we automate away all the jobs, including the fulfilling ones?”

- “Should we develop nonhuman minds that might eventually outnumber, outsmart, obsolete and replace us?”

- “Should we risk loss of control of our civilization?”

Please leave any critical questions you have in the comments box.


4. What does it mean for our world?

The release of this letter can be considered groundbreaking and fundamental for the important discussions that need to take place. For many years, engineers and technologists have warned about the dangers of black boxes within digital technology and AI specifically. One very critical step has been taken by raising awareness at this global level. The letter mentions that by taking this pause “we can enjoy an ‘AI summer’ in which we reap the rewards, engineer these systems for the clear benefit of all, and give society a chance to adapt”. It is optimistic to expect that our entire society will adapt in one summer, but it is a noble aim. Let’s hope the rules and regulations created are written down in a way that can be changed, as we will keep learning by doing, and become an example for other emerging technologies related to AI (e.g., the metaverse, robotics, blockchain, and quantum computing).


5. What can you do?

If you wonder what the impact of AI is on you or your business, or in case you feel overwhelmed by emerging technology, I advise you to use the unique FOMO 2.0 framework to help you understand how to think about it. Many leaders opt out when they try to grasp the impact of a new technology. However, given the extensive impact emerging technology has on our lives, there is no more time for fear. We need to think long-term. The time is now. Start by applying the FOMO 2.0 framework to your own thinking.

The FOMO 2.0 framework is a four-step process for individuals and organizations to approach emerging technology, such as AI. The steps are: Form your own opinion, Organize conflict, Merge insights, and Orchestrate action. The framework encourages individuals to first think critically about how the technology will impact their work and organization, then gather diverse perspectives and information from others, merge and synthesize those insights, and finally take action based on the new understanding gained. It is a tool to help leaders navigate the complexities of emerging technology and make informed decisions.


F: Form your own opinion – take some time to think about how advanced AI will impact your day-to-day work and the work of your team and organization. Are you happy or unhappy about it? Do you get more time for creativity? Or do you fear losing your job?


O: Organize conflict – once you have formed your own view, organize a meeting with 5-8 people in different functions. Ask them to do the same to get a better view of how people within your organization look at AI. You will understand your team better, and you will also gain insights into how these differing views can help you reach creative solutions.


M: Merge insights – make sure to do something with the new information and perspectives gained. Work together with your communications specialist and let people in your team or organization know what you know.


O: Orchestrate action – act upon it. In 50% of the cases I reviewed in my research, leaders gained new perspectives but didn’t share them appropriately throughout their team or organization. Don’t leave yourself and your people hanging. Leaders who care and adapt to the changes around them get better results.
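For readers who like to make a process concrete, the four FOMO 2.0 steps above can be sketched as a simple sequential checklist. This is purely illustrative: the class and method names below are my own invention, not part of the framework as published, but the sketch captures the key property that each step builds on the one before it.

```python
# Illustrative sketch only: the FOMO 2.0 steps as an ordered checklist.
# The names here are hypothetical, not part of the published framework.
from dataclasses import dataclass, field


@dataclass
class Step:
    letter: str
    name: str
    done: bool = False


@dataclass
class Fomo2Checklist:
    steps: list = field(default_factory=lambda: [
        Step("F", "Form your own opinion"),
        Step("O", "Organize conflict"),
        Step("M", "Merge insights"),
        Step("O", "Orchestrate action"),
    ])

    def complete(self, name: str) -> None:
        """Mark a step done, but only if every earlier step is already
        done: the framework is sequential by design."""
        for step in self.steps:
            if step.name == name:
                step.done = True
                return
            if not step.done:
                raise ValueError(f"Finish '{step.name}' before '{name}'")
        raise ValueError(f"Unknown step: {name}")

    def next_step(self):
        """Return the first unfinished step, or None when all are done."""
        return next((s for s in self.steps if not s.done), None)


checklist = Fomo2Checklist()
checklist.complete("Form your own opinion")
print(checklist.next_step().name)  # → Organize conflict
```

The point of the sequencing guard is the same one made in the steps above: organizing conflict before you have formed your own opinion, or orchestrating action before merging insights, defeats the purpose of the framework.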
