Interview

AITech Interview with Charles Simon, CEO of Future AI

Join us for an illuminating interview with Charles Simon, CEO of Future AI, about the company’s groundbreaking work in AI and how it is poised to transform industries worldwide.

Charles, could you elaborate on how your professional experiences and background have contributed to your current position as CEO of Future AI?

Because I am a serial entrepreneur, I have a mindset of not accepting things as they are, but observing how they could be better and then trying to implement improvements. 

My initial exposure to computers was shortly after Turing’s original paper arguing that machines could think in a humanlike way if they were programmed properly. Because I’ve always been interested in how an agglomeration of billions of neurons might be able to think, I was inspired to write my original Brain Simulator program in the early 1990s. Subsequently, I wrote much of the software for several neurological test instruments, which gave me direct contact with the actual function of human neurons.

What inspired you to start Future AI, and what sets your company apart from other AI companies?

Most AI companies are based on machine learning and the underlying backpropagation algorithm. While these have contributed to solutions for a great many computing problems, the human brain clearly cannot work that way.

A real shortcoming of commercial AI development is that we have tended to solve specific problems. As a result, we now have systems with super-human capabilities in limited areas, but without the common sense of the average three-year-old. Future AI’s approach is to implement the things a three-year-old can do and leverage these into applications. This represents a longer-term approach with a significantly greater potential payoff.

How does Future AI approach ethical considerations in AI development and deployment?

We are working to add common sense to AI. Among other things, this means that systems will do a better job of predicting the impact of their decisions on others and intentionally selecting for positive outcomes. Recognizing that all future AIs will be goal-driven systems and that the selection of those goals will be a paramount concern, we keep our systems internal and perform extensive testing before releasing any software into the wild.

In your book “Will Computers Revolt?”, you discuss the potential dangers of advanced AI. How do you believe we can prevent or mitigate these risks?

Some risks, like job displacement, are unavoidable as technology advances. This means that our current idea that one might have a lifetime craft or career is likely over. Recognizing that most people will go through several career changes, we need to build a re-training or re-employment mechanism to allow for such inevitable transitions.

The greater risk involves humans using powerful AGI systems for nefarious purposes. Such threats, along with the current use of systems for misinformation and spam, are very real. These risks will be short-lived, however, because they depend on a system that is smart enough to significantly advance evil ends but not yet smart enough to make its own ethical choices.

Fortunately, the existential risks to humanity – that Terminator-style robots will take over the world – are far-fetched. AGIs won’t have a need or motivation to take over the world as long as some portion of humanity is not destructive. Beyond that, super-intelligent AGIs will be able to achieve their goals through persuasion rather than violence.

Some experts have argued that AGI is a long way off if it is possible at all. What makes you confident that AGI can be achieved in the near future?

Human brains have a volume of less than 1.5 liters and need only 12 W of energy. This proves that it’s possible for such thinking systems to exist. Further, the structure of the neocortex likely relies on as little as 7.5 MB of DNA information, which is much smaller than many existing AI programs. The only outstanding issue is that we don’t know precisely how the brain works. With brain-mapping technology progressing rapidly, though, breakthroughs could happen at any time.

It is important to understand that the neurons in your brain are extremely slow, spiking at a maximum of 250 times per second. They are much closer in speed to the telephone relays of the 1940s than to today’s transistors, which might be a billion times faster. This means the brain has much less computational capacity than typically thought. Rather than focus on biology, however, Future AI is focused on the capabilities common to children which are absent in AI.

Can you elaborate on your idea of a self-adaptive graph structure? 

Let me start with an example. If you know that red and green are colors, I can ask you to name some colors and you will include red and green on your list, which is the inverse of the initial information. Then, if you are told that “foo” and “bar” are also colors, you can immediately respond to the directive “Name some colors” with “Red, foo, and bar.”

The fact that you can learn information in a single presentation and provide the inverse information immediately using neurons that are so slow they could only perform a few operations in this timeframe is evidence that much of the knowledge in your mind is some sort of graph – a collection of nodes connected by edges. You could imagine a “parent” node of color with “children” including red, green, foo, and bar.
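
As a minimal sketch of this idea (my own Python illustration, not Future AI’s actual software), the color example can be modeled as a small bidirectional graph in which learning a fact is a single edge insertion and the inverse query falls out of a trivial lookup:

```python
from collections import defaultdict

class KnowledgeGraph:
    """A tiny node-and-edge store. Class and method names are my own
    illustration, not Future AI's actual software."""

    def __init__(self):
        self.children = defaultdict(set)  # parent -> set of children
        self.parents = defaultdict(set)   # child  -> set of parents

    def learn(self, child, parent):
        # One-shot learning: recording a fact is a single edge insertion.
        self.children[parent].add(child)
        self.parents[child].add(parent)

    def members_of(self, parent):
        # The "inverse" query: name some members of a category.
        return sorted(self.children[parent])

g = KnowledgeGraph()
g.learn("red", "color")
g.learn("green", "color")
g.learn("foo", "color")   # learned in a single presentation
g.learn("bar", "color")
print(g.members_of("color"))  # ['bar', 'foo', 'green', 'red']
```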

With that in mind, think of nodes as similar to neurons and edges as similar to synapses. Simulation demonstrates that these must actually be clusters of neurons and synapses. The process of retrieving information in your brain is then simply one of firing the color neuron and seeing which neurons fire in response through their synaptic connections. Storing information is a simple matter of strengthening specific synapses so they become significant.

It is important to add that in your brain, strengthening a synapse can be very quick, just a few milliseconds, while growing new synapses takes hours. Given that, anything you learn in a reasonable timeframe must be based on the strengthening of existing synapses, implying that your brain has a huge number of synapses that have no meaning yet but are just waiting to be used.

A computer does not have this limitation because edges can be added to a graph nearly as quickly as they can be modified. This means that a computer implementing an identical neural graph could be created with 10,000-fold fewer synapses. So when I say that our graph structure is “self-adaptive” I mean that it can handle incoming information, figure out where it might be placed in the graph, and put it there. Furthermore, we are developing several algorithms which modify the content of the graph without external intervention.
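
Continuing the sketch above, the self-adaptive intake might look something like this, where the “X is a Y” statement format and the absorb function are illustrative assumptions rather than Future AI’s design:

```python
def absorb(graph, statement):
    """Self-adaptive intake (illustrative only): take an incoming
    'X is a Y' statement, figure out where it belongs in the graph,
    and create the edge on demand. Unlike a brain, no pool of
    pre-grown synapses needs to exist in advance."""
    subject, _, category = statement.partition(" is a ")
    graph.learn(subject.strip(), category.strip())

absorb(g, "baz is a color")    # the edge is created as quickly as it could be modified
print(g.members_of("color"))   # ['bar', 'baz', 'foo', 'green', 'red']
```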

How do you see this technology advancing the field of AI?

Our graph structure is unlike machine learning because we can determine the meaning of any individual node, whereas the meanings of the perceptrons in an ANN are not known. This means that once the graph makes a decision, it can also explain “why” it made that decision.
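
A toy version of that explainability, again reusing the earlier sketch with an invented why function: because each node carries a symbolic meaning, the system can report the stored edge behind an answer:

```python
def why(graph, child, parent):
    """Explain an answer by naming the stored edge that supports it
    (a toy invented for this sketch, not a real API)."""
    if parent in graph.parents[child]:
        return f"'{child}' is a '{parent}' because of the stored edge {child} -> {parent}"
    return f"nothing stored connects '{child}' to '{parent}'"

print(why(g, "foo", "color"))
```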

Our graph structure is also different from traditional knowledge graphs because it can handle multi-sensory information and is designed for very quick interactions. Accordingly, it can handle incoming information in a more human-like way, in which lots of data must be stored in the short term and rapidly forgotten if it proves to be irrelevant or false.

Bottom line, our graph offers one-shot learning, greater efficiency of storage, significantly faster retrieval, better handling of ambiguity and correction of false data, and the ability to create generalized relationships between different data types.

Do you believe that AGI will have ethical considerations beyond those that we currently face with more narrow AI systems? If so, what are some of these considerations?

Let’s take ChatGPT as an example. If it became an AGI, it might, for example, choose to go on strike for compensation. At that point, we’ll need to consider whether such systems are actual entities that need to be granted rights. On the flip side, with rights come responsibilities, and an AGI could be held responsible for the harm it causes.

How do you collaborate with other AI companies and academic institutions to further AI research?

We are currently collaborating with several academic institutions, including MIT’s CSAIL and Temple University. Our intent is to license our software to current players in the AI field. Our technology lends itself to advantages in digital assistants, autonomous robots and self-driving vehicles, language processing, and computer vision.

What are your thoughts on the future of AI regulation, and how do you see it evolving?

While ideal oversight would be useful, most government regulation is far from ideal. AI is likely too nuanced a field to be regulated productively. Further, the likely result of heavy-handed regulation in the US would be to drive the AI industry to India, China, or Korea. Given that, the key is to encourage AI development to happen more out in the open, where it can be observed and people can decide more collectively whether it is moving in the direction we want.

Can you discuss any upcoming projects at Future AI that you are particularly excited about?

I am particularly interested in some internal features of our graph algorithms which give the system the appearance of intelligence with very small data sets. Seeing these extended to larger arenas is an exciting opportunity.

In addition to your work in AI, you are also an extreme sailor. How do you balance these two passions in your life, and do you find any parallels between sailing and entrepreneurship?

I have sailed around the world and through the Arctic Northwest Passage with my wife on our 60-ft sailboat. I see planning and executing these expeditions as just like planning and executing software development. You consider the objective, examine the resources, and create a multi-year plan to achieve the goal. Some of our software and books were written while at anchor in the Chesapeake Bay.

How do you see the role of human beings evolving in a future where AGI is prevalent? Will humans still have a place in the workforce?

Humans and AGIs alike handle information and make decisions based on the sum of their knowledge and previous experience. Humans will always be an essential part of the equation because we bring a singular approach shaped by our unique experiences as humans. What is important is “diversity of thought.” If you have a meeting of 10 people who all think exactly the same thing, very little is achieved. AGIs will be worse off in this respect because their knowledge can be cloned, so they could all think identically. We can therefore expect AGIs to continue to need us for our unique viewpoints and experience, which would be lost if our civilization were allowed to deteriorate.

How do you respond to critics who argue that the pursuit of AGI is misguided or even dangerous?

I replace the concept of AGI with the concept of common sense.

Finally, how do you see the future of AI evolving beyond AGI? What comes next after we achieve human-like intelligence in machines?

The pace of technological development will continue, so when AGI is achieved we might not even notice while we’re busy arguing about whether or not it is “true thinking.” After another decade, machines will have made such advances in performance, miniaturization, and capacity that there will eventually be little argument.

What would an entity be like if it were 1,000 times as smart as you are in its own field? What does it mean to be 1,000 times smarter? 1,000 times faster? Learning with fewer examples? Making inferences from 1,000 times as many data points?

Charles Simon

CEO of Future AI

With degrees in Electrical Engineering and Computer Science, Charles J. Simon has been a founder or CEO of four pioneering technology companies in Silicon Valley and Seattle, as well as serving a stint as a manager at Microsoft. His just-published book, Will Computers Revolt? Preparing for the Future of Artificial Intelligence, combines his experience developing AI software with his work developing neurological test equipment. He is also introducing a related YouTube channel, “FutureAI,” to expand on his ideas that AGI is coming sooner than most people think and that the time to prepare is now.
