Steven Levy Says Everyone Wants To Regulate AI But No One Can Agree How. Steven, The Answer Is Simple And Proven.

In a May 26 Wired newsletter titled “Everyone Wants to Regulate AI. No One Can Agree How,” Steven Levy explores efforts to regulate AI development. A Forbes headline four days later sums it up: “AI Could Cause Human ‘Extinction,’ Tech Leaders Warn.”

In what must be a first among appeals for regulation, the CEO of one of the major AI industry players, OpenAI’s Sam Altman, urged the US Congress to regulate his own business lest his product become a devouring monster.

Remarkable as that is, even more remarkable is the assumption that regulation from a nation that represents less than five percent of the world’s population somehow applies to AI developers in every nation. Seriously, their assumption is that somehow the fine engineers coming out of India’s IIT are going to obey rules that are promulgated in Washington DC.

Some, like American commentator Kara Swisher, recognize the fallacy that is somehow not obvious to Altman and others. A source of governance with global jurisdiction is called for.

The City of Osmio is an online municipality whose original charter was written on March 7, 2005, at the Geneva headquarters of the International Telecommunication Union, a United Nations agency. Osmio’s jurisdiction is global. Its purpose is to provide a certification authority to the digital world. Osmio is the entity that signs the digital identity certificate that is bound to your digital signing PEN, also called your “Privacy PEN.” PEN stands for Personal Endorsement Number; you PKI jocks will recognize it as a type of private key.

It turns out that the solution to that jurisdiction problem is exactly the same as the solution to the problem of lack of accountability in artificial intelligence.

Shortly, we’ll show you that a very old method can ensure that AI remains under the control of human beings. But first, we need to introduce a very reliable old technology that deserves to see the light of day, because it has never been needed more than now. It’s so well-proven and so incredibly useful that it’s truly astounding that so few people know about it.

That technology is the TDS or True Digital Signature. (True digital signatures are not the same thing as “electronic signatures.”)

Let’s take a minute to show how this old technology works.


If I digitally sign any file – a contract, an image, a video, program code, any digital file – and send the signed file to you, you can know for certain that I signed it and that not a single bit has been changed since I signed it. The fundamental technology behind true digital signatures was created back in the seventies at GCHQ, the British cryptography agency whose wartime predecessor was where, decades earlier, Alan Turing had shortened World War II by cracking the German Enigma secret communication codes.
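To make the sign-then-verify mechanics concrete, here is a minimal, stdlib-only sketch using textbook RSA with a deliberately tiny key. This is an illustration only, not the production technique: real signatures use vetted libraries and keys thousands of bits long, and every number below is a classroom-sized stand-in.

```python
import hashlib

# Toy "textbook RSA" signature, stdlib only, to illustrate the idea.
# The tiny key below is hopelessly insecure; it exists only to show
# the mechanics of hash-then-sign and public verification.
p, q = 61, 53
n = p * q          # public modulus (part of the signer's public key)
e = 17             # public exponent
d = 2753           # private exponent: e * d = 1 mod (p-1)(q-1)

def digest(data: bytes) -> int:
    # Hash the file, then reduce the hash into the key's range.
    return int.from_bytes(hashlib.sha256(data).digest(), "big") % n

def sign(data: bytes) -> int:
    # Only the holder of the private exponent d can produce this.
    return pow(digest(data), d, n)

def verify(data: bytes, sig: int) -> bool:
    # Anyone holding the public key (n, e) can check it.
    return pow(sig, e, n) == digest(data)

contract = b"I agree to pay 100 francs."
sig = sign(contract)
assert verify(contract, sig)                 # the untouched file verifies
assert not verify(contract, (sig + 1) % n)   # a forged signature fails
```

Changing even one bit of the file changes its digest, so verification fails for any altered copy just as it does for the forged signature above.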

All well and good, you say, but how do I know the signer is really who they claim to be?

The solution to that one is a youngster, first published a mere six years ago, when the US National Institute of Standards and Technology – NIST – created its 800-63 measure of the reliability of an identity claim. Subsequent developments such as Osmio IDQA add technology to that methodology, binding your identity reliability score to the public number that goes with the digital PEN that signs the file. So now you not only know that the file was signed by the human being who owns that PEN and that nothing has been altered since they signed it; you also know how much you can trust that they are really who they say they are.
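Osmio IDQA’s actual certificate format is not described here, so the following is a hypothetical sketch of the general idea only: a certification authority signs the pairing of a public key with an identity-assurance score, so a relying party can check the key and the reliability claim in one step. The field names, the assurance label, and the toy key are all invented for illustration.

```python
import hashlib, json

# Hypothetical sketch: an authority binds an identity-assurance score
# (here a NIST 800-63-style level label) to a signer's public key by
# signing the pairing itself. Toy textbook-RSA key, illustration only.
p, q, e, d = 61, 53, 17, 2753
n = p * q

def digest(data: bytes) -> int:
    return int.from_bytes(hashlib.sha256(data).digest(), "big") % n

cert = {
    "subject": "Jane Doe",
    "public_key": {"n": 3233, "e": 17},  # the number that goes with her PEN
    "identity_assurance": "IAL2",        # how thoroughly she was vetted
}
# Canonical serialization, so signer and verifier hash identical bytes.
payload = json.dumps(cert, sort_keys=True).encode()
ca_signature = pow(digest(payload), d, n)  # the authority signs the binding

# A relying party re-hashes the certificate and checks the CA's signature:
assert pow(ca_signature, e, n) == digest(payload)
```

Because the assurance score lives inside the signed certificate, nobody can quietly upgrade “IAL2” to a stronger claim without invalidating the authority’s signature.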

To put true digital signatures to work solving the problem of AI accountability we’ll also need a method that’s even older than true digital signatures. This old method does what a lot of government regulation attempts to accomplish.

Demands for regulations get so much attention because regulations are so difficult to get right. The endless debate about amounts and types of regulation keeps the subject in the news.

Meanwhile, professional licensing accomplishes much of what regulation attempts to accomplish – and does it so quietly and effectively that it seldom makes it into the news.

Two things go into professional licensing. The first is an attestation of competence, established through testing and other methods.

But the more important part of professional licensing is acceptance of liability. For instance, before a new building can be occupied, the professionally licensed architect, contractor, structural engineer, and building inspector must put their livelihoods and reputations on the line by authorizing the issuance of an occupancy permit.

It’s not some committee of bureaucrats that examines those I-beams; it’s an individual human being who gets paid well for accepting the consequences if they mess up. Those consequences include searching for another job, one that doesn’t pay nearly as well. Did I mention that professional licensees get paid well for accepting liability?

What if any AI program capable of presenting itself as a human being had to be digitally signed by a professionally licensed human AI handler? Every user of that program would know exactly which human being is legally responsible for what it does.
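As a sketch of what such a gate might look like in software – and only a sketch, since no such registry exists in the source – deployment is refused unless the model artifact verifies against the public key of a handler listed in a licensing registry. The registry contents, license IDs, and toy key are all invented for illustration.

```python
import hashlib

# Hypothetical sketch: refuse to run an AI artifact unless it carries a
# valid signature from a handler found in a licensing registry.
# Toy textbook-RSA key (insecure, demo-sized); all names are invented.
n, e = 3233, 17                                  # handler's public key
LICENSED_HANDLERS = {"lic-4421": {"name": "J. Doe", "key": (n, e)}}

def digest(data: bytes) -> int:
    return int.from_bytes(hashlib.sha256(data).digest(), "big") % n

def may_deploy(model_bytes: bytes, signature: int, license_id: str) -> bool:
    handler = LICENSED_HANDLERS.get(license_id)
    if handler is None:
        return False          # no licensed human has accepted liability
    key_n, key_e = handler["key"]
    return pow(signature, key_e, key_n) == digest(model_bytes)

model = b"...model weights..."
d = 2753                       # handler's private exponent (toy value)
sig = pow(digest(model), d, n)
assert may_deploy(model, sig, "lic-4421")        # licensed + signed: allowed
assert not may_deploy(model, sig, "lic-0000")    # unlicensed: rejected
```

The design point is that the check is mechanical: the hard part – vetting competence and assigning liability – happens in the licensing process, not in the code.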

This combination of the old technologies of true digital signatures and identity reliability metrics bound to credentials, along with the even older methodology of professional licensing, can solve not only the problem of control of AI but many other problems born of technology as well.

For more, see this link.