Which photo represents The Dawn of the Age of Artificial Intelligence?
Greg Brockman is likely a name you do not recognize even though he is on the 2018 Forbes 30 Under 30 list for Enterprise Technology. He's 29. The Forbes entry reports his education as follows: "Drop Out, Harvard University; Bachelor of Arts/Science, Massachusetts Institute of Technology (also drop out)." It tells you he resides in San Francisco.

"Halt and Catch Fire (HCF): An early computer command that sent the machine into a race condition, forcing all instructions to compete for superiority at once. Control of the computer could not be regained." - title card for the TV series Halt and Catch Fire
In computer engineering, Halt and Catch Fire, known by the assembly mnemonic HCF, is an idiom referring to a computer machine code instruction that causes the computer's central processing unit (CPU) to cease meaningful operation, typically requiring a restart of the computer. - Wikipedia
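For readers who want a concrete picture of the idiom, here is a toy sketch in Python - purely illustrative, with a made-up opcode value, and not the behavior of any real processor or documented instruction - of a machine that, once it hits an "HCF" instruction, stops doing meaningful work until it is reset:

```python
# A toy illustration only: a minimal "CPU" that, upon hitting a made-up HCF
# opcode, stops doing meaningful work until it is explicitly reset. This is
# not the behavior of any real processor or documented instruction.

HCF = 0xDD  # hypothetical opcode value chosen purely for illustration

class ToyCPU:
    def __init__(self):
        self.halted = False
        self.work_done = 0

    def execute(self, opcode):
        if self.halted:
            return "no response - the machine must be restarted"
        if opcode == HCF:
            self.halted = True  # cease meaningful operation
            return "halt and catch fire"
        self.work_done += 1     # stand-in for normal, useful work
        return f"ok ({self.work_done} instructions completed)"

    def reset(self):
        """The only way out: a restart."""
        self.halted = False
        self.work_done = 0

cpu = ToyCPU()
print(cpu.execute(0x01))  # normal instruction
print(cpu.execute(HCF))   # the fatal one
print(cpu.execute(0x01))  # ignored until reset
cpu.reset()               # the "reboot"
print(cpu.execute(0x01))  # working again
```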
Brockman testified before Congress Tuesday on artificial intelligence (AI). Thinking about Brockman reminded me of....
Halt and Catch Fire
In its first season, the TV series Halt and Catch Fire won the Critics' Choice Television Award for Most Exciting New Series. By the third season it had a 96% approval rating on Rotten Tomatoes. Its fourth and final season, which received critical acclaim, held a 100% approval rating on Rotten Tomatoes.
Per Wikipedia: "Taking place over a period of ten years, the series depicts a fictionalized insider's view of the personal computer revolution of the 1980s and later the growth of the World Wide Web in the early 1990s."
Halt and Catch Fire told the story of a few people who found themselves in the middle of the creation of technology that thus far has driven the 21st Century. It aired on AMC from June 1, 2014, to October 14, 2017.
On February 6, 2018, it won the Women's Image Network Award for Best Drama Series, for the show offered a representation of women in tech and management that you would have a hard time finding elsewhere.
For someone who was involved with computers beginning in the 1970s and 1980s, the show was a historical piece, a story of the late 20th Century, and well done. It also reminded me of how young and naive we were - unaware of the real implications of what we were doing.
One programmer observed: “I know that something’s coming, something big, like a train, and all I want is to jump on board. But it’s getting faster and faster and I’m terrified I’m going to miss it … I don’t want to get left behind.”
With foresight, another young staffer in his suicide note warned: “Beware of false prophets who will sell you a fake future, of bad teachers and corrupt leaders and dirty corporations … But most of all beware of each other, because everything is about to change. The world is going to crack wide open. The barriers between us will disappear, and we’re not ready. We’ll hurt each other in new ways. We’ll sell and be sold. We’ll expose our most tender selves only to be mocked and destroyed. We’ll be so vulnerable and we’ll pay the price.”
Airing in the second decade of the 21st Century, the series offers hindsight, which sometimes provides insight into current events. And yet relatively few Americans watched it. And why would they?
After all, as late as 2006, United States Senator Ted Stevens was reflecting the average American's understanding of the technology that could make or break their employer in that decade:
Ten movies streaming across that, that Internet, and what happens to your own personal Internet? I just the other day got… an Internet was sent by my staff at 10 o'clock in the morning on Friday. I got it yesterday [Tuesday]. Why? Because it got tangled up with all these things going on the Internet commercially.

...They want to deliver vast amounts of information over the Internet. And again, the Internet is not something that you just dump something on. It's not a big truck. It's a series of tubes. And if you don't understand, those tubes can be filled and if they are filled, when you put your message in, it gets in line and it's going to be delayed by anyone that puts into that tube enormous amounts of material, enormous amounts of material.

Even recognizing that Stevens was just one member of Congress, most of us who were involved in the computer industry in the 1970s and 1980s, and who also had governmental/political involvement, understood that the 18th Century U.S. Constitution was entering a "Halt and Catch Fire" condition. Because it is government, it would take about a decade before the need to "reboot" our federal government with all new "machine code" uploaded would become obvious.
And indeed in 2016 the need to "reboot" our federal government with all new "machine code" did become obvious, with the Russian interference in the Presidential Election built upon the primary goal of the American corporate internet - advertising to make corporations rich. And it became obvious again with the effective use of internet social media by a reality game show host with no previous political or government experience to get himself elected President.
The fact "it won't work anymore" came from knowning that Ted Stevens chaired the United States Senate Committee on Commerce, Science and Transportation. And because of his very limited knowledge about 21st Century technology he used the "series of tubes" metaphor to criticize a proposed amendment to a committee bill which would have prohibited Internet service providers such as AT&T, Comcast, Time Warner Cable and Verizon Communications from charging fees to give some companies' data a higher priority in relation to other traffic.
And today, while members of Congress are somewhat better versed in the 50-year-old technology, their median expertise level is only slightly better than knowing how to watch cat videos on YouTube. Even their staffers are most certainly not at the level necessary to begin the process of regulating Artificial Intelligence. The "cat video" level of knowledge (along with a predisposition to listen to corporate lobbyists in order to fund reelection campaigns) is why, in the United States, achieving privacy and security on the internet through Congressional action will never happen.
The Dawn of the Age of Artificial Intelligence
On Wednesday November 30, 2016, Greg Brockman gave his first testimony on Capitol Hill to the Senate Commerce Subcommittee on Space, Science, and Competitiveness. The subject matter of the hearing was "The Dawn of the Age of Artificial Intelligence" and the Chair of the Subcommittee was a different Senator named Ted. Here are some of the hearing opening remarks from Senator Ted Cruz:
Today, we’re on the verge of a new technological revolution, thanks to the rapid advances in processing power, the rise of big data, cloud computing, mobility due to wireless capability, and advanced algorithms. Many believe that there may not be a single technology that will shape our world more in the next 50 years than artificial intelligence. In fact, some have observed that, as powerful and transformative as the Internet has been, it may be best remembered as the predicate for artificial intelligence and machine learning.
Artificial intelligence is at an inflection point. While the concept of artificial intelligence has been around for at least 60 years, more recent breakthroughs...have brought artificial intelligence from mere concept to reality.
Whether we recognize it or not, artificial intelligence is already seeping into our daily lives. In the healthcare sector, artificial intelligence is increasingly being used to predict diseases at an earlier stage, thereby allowing the use of preventative treatment, which can help lead to better patient outcomes, faster healing, and lower costs. In transportation, artificial intelligence is not only being used in smarter traffic management applications to reduce traffic, but is also set to disrupt the automotive industry through the emergence of self-driving vehicles. Consumers can harness the power of artificial intelligence through online search engines and virtual personal assistants via smart devices, such as Microsoft’s Cortana, Apple’s Siri, Amazon’s Alexa, and Google Home. Artificial intelligence also has the potential to contribute to economic growth in both the near and long term. A 2016 Accenture report predicted that artificial intelligence could double annual economic growth rates by 2035 and boost labor productivity by up to 40 percent.
Furthermore, market research firm Forrester recently predicted that there will be a greater-than-300-percent increase in investment in artificial intelligence in 2017 compared to 2016. While the emergence of artificial intelligence has the opportunity to improve our lives, it will also have vast implications for our country and the American people that Congress will need to consider, moving forward....
Today, the United States is the preeminent leader in developing artificial intelligence. But, that could soon change. ...Ceding leadership in developing artificial intelligence to China, Russia, and other foreign governments will not only place the United States at a technological disadvantage, but it could have grave implications for national security.
We are living in the dawn of artificial intelligence. And it is incumbent that Congress and this subcommittee begin to learn about the vast implications of this emerging technology to ensure that the United States remains a global leader throughout the 21st century. This is the first congressional hearing on artificial intelligence....
As did a number of leaders in the AI industry, Brockman gave an extensive presentation. Here are some key points:

I’m Greg Brockman, Co-Founder and Chief Technology Officer of OpenAI. OpenAI is a nonprofit AI research company with a billion dollars in funding. Our mission is to build safe, advanced AI technology, and to ensure that its benefits are distributed to everyone....
The U.S. has led essentially all technological breakthroughs of the past 100 years. And they’ve consistently created new companies, new jobs, and increased American competitiveness in the world. AI has the potential to be our biggest advance yet.
Today, we have a lead, but we don’t have a monopoly, when it comes to AI. This year, Chinese teams won the top categories in a Stanford annual image recognition contest. South Korea declared a billion-dollar AI fund. Canada actually produced a lot of the technologies that have kicked off the current boom. And they recently announced their own renewed investment into AI.
So, right now I would like to share three key points for how the U.S. can lead in AI:

The first of these is that we need to compete on applications. But, when it comes to basic research, that should be open and collaborative....
The second thing...is that we need public measurement and contests. There’s really a long history of contests causing major advances in the field. For example, the DARPA Grand Challenge really led directly to the self-driving technology that’s being commercialized today. ...Measures and contests help distinguish hype from substance, and they offer better forecasting. ...Good policy responses and a healthy public debate are really going to depend on people having clear data about how the technology is progressing. What can we do? What still remains science fiction? How fast are things moving? So, we really support OSTP’s recommendation that the government keep a close watch on AI advancement, and that it work with industry to measure it.
The third thing that we need is that we need industry, government, and academia to start coordinating on safety, security, and ethics. The Internet was really built with security as an afterthought. And we’re still paying the cost for that today.

Academic and industrial participants are already starting to coordinate on responsible development of AI. For example, we recently published a paper, together with Stanford, Berkeley, and Google, laying out a roadmap for AI safety research. Now, what would help is feedback from the government about what issues are most concerning to it so that we can start addressing those from as early a date as possible.
...The best way to create a good future is to invent it. And we have that opportunity with AI by investing in open, basic research, by creating competitions and measurement, and by coordinating on safety, security, and ethics.

Tuesday's joint meeting of the House Subcommittee on Research and Technology and the Subcommittee on Energy offers insight into what happens when technology advances at the hands of young creators, even ones who are concerned about the deficits in the process. You can watch it on YouTube (note: the action doesn't start until 22 minutes into the video).
The problem is that the expert witnesses are asking the technologically challenged and AI-uninformed to create regulations and an ethics system, when the experts themselves are unable to know and describe what problems are likely to arise from a level of technology that does not yet exist and has never been tested.
The baseline example is the so-called "autonomous" vehicle. In that case, the first step is to acquire a dictionary and discover that "autonomous" means "existing or capable of existing independently, not subject to control from outside."
In other words, an autonomous vehicle will decide where it's going and what route it's taking, and also drive itself there. Would you climb into such a vehicle, perhaps right after you named it "Hal"? (And if you don't recognize that reference, you do need to stream the movie 2001: A Space Odyssey.)
On the other hand, a "self-driving" vehicle is capable of driving to the destination you give it, over the roads you tell it to use, hopefully safely and without your intervention through controls such as the steering wheel or brakes.
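To make the distinction concrete, here is a toy sketch in Python - the class and method names are hypothetical, not any manufacturer's actual software - in which a "self-driving" vehicle only carries out the trip a human specifies, while an "autonomous" one chooses its own destination and route:

```python
import random

# Hypothetical sketch of the distinction drawn above; no real vehicle, vendor,
# or standard exposes these classes or methods.

class SelfDrivingVehicle:
    """Drives itself, but only to the destination a human specifies."""

    def drive(self, destination, route):
        # The human supplies the goal (and here, even the route).
        return f"Driving to {destination} via {route}; the human still decides the 'where'."


class AutonomousVehicle:
    """Decides for itself where to go and how to get there."""

    def __init__(self, preferences):
        self.preferences = preferences  # the machine's own goals, not yours

    def drive(self):
        destination = random.choice(self.preferences)  # the vehicle chooses
        return f"I have decided we are going to {destination}."


print(SelfDrivingVehicle().drive("the office", "the route you picked"))
print(AutonomousVehicle(["the beach", "a charging station", "Las Vegas"]).drive())
```

The difference is the interface: in the first case a human supplies the goal; in the second the machine does, which is precisely the property that should give a passenger, and a regulator, pause.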
Despite the fact that self-driving-capable vehicles already exist, American governments are having trouble regulating them - and that is a case in which no ethical nuances are involved.
In 2016 Brockman observed: "The Internet was really built with security as an afterthought. And we’re still paying the cost for that today." That was after the 2016 election but before the full scope of the Russian interference problem was known. Unfortunately, we have no answers for the security problem that do not in some way interfere with individual freedom, individual privacy, or both.
I cannot even begin to imagine the operating assumptions that will go into real AI, assumptions that will turn out to be false - you know, the "oops" of technology. I cannot even begin to consider the complex ethical and moral issues that will arise even if the AI is not in the form of a Dolores (pictured to the right at the top of this post), Bernard, Maeve, or Teddy.
Government? In considering and effectively dealing with an issue as complex as AI, and with the opinions of hundreds of millions of people slowly learning about AI, you're looking at two decades of debate. Then, of course, it will be too late to have avoided layers of crises.
If you think I'm wrong, you may want to read the paper prepared for the Academy to the Third Millennium's February 1997 conference "Internet & Politics," entitled "Regulation and Deregulation of the Internet." It was presented by Eli Noam, Columbia University Professor of Finance and Economics, holder of the Paul Garrett Chair in Public Policy and Business Responsibility, and director of the Columbia Institute for Tele-Information (CITI).
In order to set context, let me return to the TV series Halt and Catch Fire. Episode 1 of Season 3. The year is 1986. The place is Silicon Valley. And Mutiny, the little internet startup that could, is celebrating a 100,000-person user base and independence from the outsourced servers it once relied upon to keep itself running. Let me repeat - the year is 1986.
Noam's presentation was given in February 1997, over a decade after real life young tech nerds like those depicted in Halt and Catch Fire were establishing the internet. His presentation was over a decade after the internet became obvious to many and nearly 20 years before the Russian interference in the U.S. Presidential election in which the only candidate who knew how to effectively use social media (because he was the only non-politician) won. In that 1997 context Noam muses:
A myth is going around that has almost been elevated to the status of platitude: “you cannot regulate the Internet.” There is a related myth, that “a bit is a bit,” that no bit can therefore be treated differently from any other, and that attempts at control are therefore doomed to fail. Both claims, though originating with technologists who implicitly seem to believe in technological determinism, are wrong even as a matter of technology....
Also, communication is not just a matter of signals but of people and institutions. For all the appeal of the notion of “virtuality,” one should not forget that physical reality is alive and well. Senders, recipients, and intermediaries are living, breathing people, or they are legally organized institutions with physical domiciles and physical hardware. The arm of the law can reach them. It may be possible to evade such law, but the same is true when it comes to tax regulations. Just because a law cannot fully stop an activity does not prove that such law is ineffective or undesirable.
This, most emphatically, does not mean that we should regulate cyberspace (whatever it is). But that is a normative question of values, not one of technological determinism. We should choose freedom because we want to, not because we have to. And that choice will not be materially different from those which societies generally apply. As the Internet moves from a nerd-preserve to an office park, shopping mall, and community center, it is sheer fantasy to expect that its uses and users will be beyond the law. This seems obvious. Yet, for many, the new medium is like a Rorschach test, an electronic blob into which they project their own fantasies, desires and fears for society. As the Russians say: Same bed, different dreams. Traditionalists find the dark forces of degeneracy, as in everything. Libertarians find an atrophy of government. Leftists find a new community, devoid of the material avarice of private business. This kind of dreaming is common for new and fundamental technology, and it is usually wrong.
A society’s choice of rules will depend, among other things, on its willingness to accept risk. The Internet is new and uncharted territory. The term “electronic frontier” is quite apt. As it happens, America has been in the frontier business for a long time. It’s good at it. It’s its defining characteristic, together with liberty and free enterprise. No wonder then that America is at the leading edge of the information age.
It is a common fallacy to over-estimate the short term but to under-estimate the long term. Thus, we over-estimate the short-term ability of electronic communications to be free of government controls, because it is believed that “you can’t regulate the Internet.” But the long term is another matter. The long term leads to entirely new concepts of political community. Just as traditional banks and traditional universities will decline, so will traditional forms of jurisdiction. A few years ago, it became fashionable to speak of communications creating the “global village” -- communal and peaceful. But there is nothing village-like in the unfolding reality. Instead, groups with shared economic interests are extending national group pluralism through the opportunity to create global interconnection with each other into the international sphere. The new group networks do not create a global village; they create instead the world as a series of electronic neighborhoods.
Communications define communities, and communities define politics. Thus, the breakdown of the coherent national communications system reflects and accelerates a fundamental centrifugalism that will reshape, in time, countries and societies. We are barely at the beginning of this evolution, and the forces of resistance are only beginning to fathom the impacts.
It has been 55 years since Americans began to use something resembling today's internet. The earliest ideas for a computer network intended to allow general communications among computer users were formulated in April 1963 by computer scientist J. C. R. Licklider in memoranda discussing the concept of the "Intergalactic Computer Network". Those ideas encompassed many of the features of the contemporary Internet. In October 1963, Licklider was appointed head of the Behavioral Sciences and Command and Control programs at the Defense Department's Advanced Research Projects Agency (ARPA). Funded by ARPA of the United States Department of Defense, ARPANET became the technical foundation of the Internet.

To put it simply, it has been over half a century since the U.S. Government began creating the internet. In the face of various national security agencies' needs for access to everyone's data, which the private sector apparently already has, Congress is struggling with how to establish any semblance of security and privacy in the face of what was created by funding approved by...Congress.
In April the following headline appeared in Newsweek: "AI Candidate Promising ‘Fair and Balanced’ Reign Attracts Thousands of Votes in Tokyo Mayoral Election." That article seemed just about as informed about AI as Congress. In a different source we are offered a more prescient perspective:
Whether it's samurai robots, a hotel staffed by robots, or AI girlfriends, it seems safe to say that one should keep an eye on Japan when it comes to developments in the field of artificial intelligence. So while it seemed a foregone conclusion that AI would eventually break into the world of politics, the way we're seeing it do so in one city in Japan is a bit surprising. The mayoral election of Tama City in Tokyo is featuring its first "AI candidate".
At least, that's what one can take from the promise of mayoral candidate Michito Matsuda. Matsuda has chosen to throw his hat into the election but is deferring to an AI-powered robot avatar, as he intends to maximize the use of artificial intelligence and rely on it heavily in the running of his municipal administration.
As he writes on his Twitter account (which is run in character in an AI persona), "For the first time in the world, AI will run in an election. Artificial intelligence will change Tama City. With the birth of an AI-Mayor, we will conduct impartial and balanced politics. We will implement policies for the future with speed, accumulate information and know-how, and lead the next generation."

Even though he lost the election, Matsuda's effort could lead to a discussion of whether politicians or AI could do a better job of governing. And so long as Americans keep voting for candidates they can socially relate to, the answer some day could be AI. Or not, as discussed in AI 101: Why AI is the Next Revolution–or Doomsday.
I'm not sure which will be more disruptive to humanity - AI or Climate Change. But I can tell Greg Brockman and Dr. Fei-Fei Li that the Congressional testimony they gave earlier this week is almost a complete waste of time.
California Joins the EU
With all of that said, California - the home of Silicon Valley - is attempting to step in where Congress has failed, essentially by adopting online privacy rules consistent with the European Union's General Data Protection Regulation (GDPR), which became effective May 25.
The GDPR begins with a simple statement: "The protection of natural persons in relation to the processing of personal data is a fundamental right. Article 8(1) of the Charter of Fundamental Rights of the European Union (the ‘Charter’) and Article 16(1) of the Treaty on the Functioning of the European Union (TFEU) provide that everyone has the right to the protection of personal data concerning him or her." The term "natural persons" is used to distinguish humans from corporations in emphasizing human rights over economic constructs.
And so this week the California Legislature passed the toughest online privacy law in these United States. However, it doesn't take effect until January 2020, in order to allow the Silicon Valley corporations to prepare.
Under the new law, California consumers will have the right to:
- know all the data collected by a business and be able to transfer it twice annually for free;
- opt out of having their personal information sold (but companies will then be able to charge those consumers higher fees);
- delete their data;
- tell a business it can't sell their data;
- know why the data is being collected;
- be informed of what categories of data will be collected before collection, and be informed of any changes to that;
- be told the categories of third parties with whom their data is shared and the categories of third parties from whom their data was acquired;
- have businesses get permission before selling any information of children under the age of 16.
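For the technically inclined, here is a minimal sketch in Python - hypothetical names and structure, since the statute prescribes no particular data model or API - of how a business might represent the kinds of consumer requests the new law contemplates:

```python
from dataclasses import dataclass
from enum import Enum, auto

# Hypothetical modeling of the rights listed above; the statute prescribes no
# particular data structure or API, and these names are invented.

class RequestType(Enum):
    KNOW_DATA_COLLECTED = auto()   # disclosure/transfer, twice a year for free
    OPT_OUT_OF_SALE = auto()       # "don't sell my personal information"
    DELETE_DATA = auto()
    KNOW_PURPOSE = auto()          # why the data is being collected
    KNOW_THIRD_PARTIES = auto()    # categories shared with / acquired from

@dataclass
class ConsumerRequest:
    consumer_id: str
    request_type: RequestType
    is_under_16: bool = False

def handle(request: ConsumerRequest) -> str:
    """Toy dispatcher showing where each right would have to be honored."""
    if request.is_under_16 and request.request_type is RequestType.OPT_OUT_OF_SALE:
        return "Sale of a minor's data already requires opt-in permission."
    return f"Processing {request.request_type.name} for consumer {request.consumer_id}."

print(handle(ConsumerRequest("c-123", RequestType.DELETE_DATA)))
print(handle(ConsumerRequest("c-456", RequestType.OPT_OUT_OF_SALE, is_under_16=True)))
```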
As the Los Angeles Times noted, "With the federal government missing in action, California should set its own rules for internet privacy."
Regarding Artificial Intelligence, Brockman, Li, and their compatriots should move their advocacy effort for an AI regulatory/ethics structure to the California Legislature. That is because, as we've pointed out on a number of issues, it is the Progressive Pacific Message that must be advocated.
The problem is that if individuals use the California online privacy law, it is very likely that the U.S. Supreme Court, with its membership reflecting the privacy preferences of most of the folks not in the Pacific States, would ultimately overturn it as a proscribed interference with interstate commerce. That is because the U.S. Constitution as literally written by the Founding Fathers primarily provides for (a) the conduct of international relations, including military defense, and (b) the regulation of interstate commerce exclusively by Congress. It took the amendments contained in what we know as the Bill of Rights to provide any provisions for human rights, and online privacy was not included.
The U.S. Government, because of its "machine code" known as the Constitution, is simply not capable of surviving the 21st Century.