+ OUR STORY

SUMMER 2022

  • › My son Ethan (a computer science major headed for AI dev) shows me text-to-image generation (Midjourney and Stable Diffusion). 

    › He says, "Dad, these can create totally novel pictures using whatever words you give them. Try it!"

    › I reply, "Tell it to draw a half-Asian guy installing a vent fan in a small bathroom, angry." (That's me.)

    › It produces several stylized graphics, impressionistic renderings, and even photorealistic versions of me doing exactly what I was doing in that instant–images that didn't exist in the universe 30 seconds earlier. Some are not remarkable, but others are staggeringly good.

    › "That's crazy, son. I feel bad for portrait artists."

NOV 2022

  • › Ethan shows me ChatGPT. 

    › We play around with it and have it write us a few prose pieces, essays, poems, and other random content.

    › Wow. This is insane. Students are going to love this. Teachers, not so much.

    › What does it all mean?

DEC 2022 – FEB 2023

  • › Media coverage of AI advancement has increased dramatically. AI-related stories feature multiple times daily on every channel. 

    › Reporting ranges from alarmist warnings to affirmations of AI's benefits to humanity.

    › Theme: no one knows what's going to happen.

MAR 2023

  • › Remember that one time when the co-founder of Apple and the Let's-go-to-Mars guru asked, "Can we just tap the brakes a minute?"

    › Haha! What?

    › This news stopped me in my tracks. I had to pause and think about what I was hearing.

    › There are signs of fear and chaos from behind the curtain.

  • › In an email to his newsletter list titled "Houston, we have a problem," world-renowned photographer Jeremy Cowart describes being unable to tell the difference between some photos produced by AI and actual photos of real people (taken by real people).

    › If Jeremy Cowart (who has done global projects for the UN and meticulously studied hundreds of thousands of human portraits) can't tell the difference, who can? Houston, we do indeed have a problem.

  • › Why is it OK for AI to help make better phones and cars but not jeans? Hm. I don't know.

    › There is something about people's relationship with their Levi's that they don't want AI to mess with.

    › There is some elusive human sensibility at work, but the dynamic is difficult to understand or articulate.

APR 2023

  • › My good friend Brian Conn and I went to lunch and ruminated on all the preceding fodder as we enjoyed a couple of delicious wedge salads.

    › By the end of lunch, we had arrived at the following conclusion: A day is coming when the default assumption will be, "I am going to assume AI made this unless you can prove otherwise." It is not only possible, not only plausible–it is almost inevitable.

    › And it's a massive problem.

  • › How can our constitutional system affect the outcome? This is a good question.

    › Laws can't be written big enough for this. Laws are only societal strictures established to govern most public human behavior. Laws are shaped, guided, and powered by an appeal to common ethical assumptions.

    › E.g., we don't refrain from killing people merely because it's against the law. And most people do their best to parse the impossibly complex tax code and report their taxes honestly. Though the fear of getting audited is real, the chances are slim. Most people are honest because they don't want to consciously commit fraud and cheat the system.

    › Most people care about being as honest as they can be. The power of the ethic of truth and honesty should not be dismissed. It is paramount in this effort.

    › In short, the human ethical spirit remains intact, and we must leverage it.

  • › I have been following the digital watermarking and blockchain-style embedding discussions as best I can, but I haven't seen the relevance. AI doesn't simply copy (or plagiarize)–AI produces novel content. AI is creating "original" work. And it's not copying anyone. It's copying everyone.

    › I also have a hunch that, try as we might, attempts to develop AI-detection technology are almost as sure to fail as a dog's attempt to catch its tail. Like Joshua, the WOPR supercomputer in WarGames, it becomes an endless game of no-one-can-win global thermonuclear war. The supercomputer engages in a self-crafted, self-administered learning exercise with incredible computational power and finally comes to a beneficent conclusion. We don't have to burn the world to the ground. Shall we play a game of chess?

  • › I begin contacting personal friends in various strata of society to ask them if I'm crazy. Is an ethical solution like the VerifiedHuman model worth pursuing?

    › I invite 25 friends (and friends of theirs) to be advisors in developing this project. Some are still considering how they might contribute, others advise as needed, and a few are actively helping form a business plan and marketing strategy.

  • › Humans are a unique species. We are the only living things that use complex language.

    › And we are good at creating things that can kill us.

    › But we also work in ways only humans can, fueled by things transcending laws and technological advancements. 

    › These things are part of the human spirit and are made evident by our behavior. They are our ethics, principles, moral compass, and values, and the way we exercise them in how we live.

    › We tell the truth because the truth matters. When we lie, others suffer harm, and we also suffer injury.

    › We believe the general human disposition is to elevate what is right, what is just, and what is equitable.

    › It is on the strength of this human instinct–the value of truth–that we draw unquantifiable equity.

  • › Creating a broadly understood standard is an excellent place to start.

    › It must be straightforward–a statement a nine-year-old poet in Minnesota or a Harvard-educated solid-state physicist could easily understand and agree to.

    › The standard conveys the following: Hey, world, I made this. A machine didn't make this.

    › The standard must be unencumbered by legal jargon and a litany of definitions, so an accompanying pro forma needs to be developed providing:

    + A definition of terms in straightforward, transparent language

    + What the standard means

    + What the standard does not mean

    + FAQ: A set of shared assumptions and questions reasonable people would bring to the presentation of any ethic

  • › Since we all use AI (unknowingly) in many ways, work must be done to delineate the difference between AI-ASSISTED and AI-CREATED. 

  • › We envision a future where students and teachers can cooperate and learn how to incorporate AI technology into the pedagogical (teaching/learning) experience.

    › In a world where teachers and students have access to the same powerful tools, technology, and information, and where students often learn about and become proficient with new technology before their teachers do, the academy has a vested interest in preserving and leveraging the teacher's lived experience and calling as an educator.

    › Students and parents have a vested interest in students learning how to create original, inspired work–learning how to read, write, think, create images, and compose music–even while utilizing the latest advances in technology to produce work at levels far beyond what was imaginable even one year ago.

  • › Work must be done to develop an organizational ethic that says, We have to automate; we have to take advantage of the edge AI can give us in the industry, but we also need to protect human workers. Even if we have to change our company's fundamental structure, we will figure out how to remain competitive in the market, to produce more and better, but do so with as many people as possible.

  • › I have said, "This isn't the civilization-ending part of the problem we are discussing. It's just the differentiation part." The two are more closely related than I initially thought.

    › It's all happening simultaneously. Human operators "tweak the dials" on how content propagates in these AI brains–the neural networks that create these large language models. The learning loop in the networks is almost infinitely complex, self-generative, and unknowable. Networks learning to model language are a perfect example of a "stochastic" process: the distribution of outcomes is predictable, but the path to any particular outcome is obscured by incalculable randomness and seemingly infinite minute layers of complexity. The human-to-LLM interfaces can be filtered so that some data pass through and others are suppressed, which inherently introduces human bias.
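
    › To make "stochastic" concrete, here is a deliberately tiny sketch (a toy model of my own invention, not anything from a real LLM): the word probabilities are fixed and fully knowable, yet two runs with the same prompt can produce different sentences.

```python
import random

# A toy next-word model with fixed, fully known probabilities.
# The *distribution* of outputs is predictable; any single output is not.
NEXT_WORD = {
    "the": [("dog", 0.4), ("cat", 0.4), ("piano", 0.2)],
    "dog": [("barks", 0.7), ("sleeps", 0.3)],
    "cat": [("sleeps", 0.6), ("pounces", 0.4)],
    "piano": [("plays", 1.0)],
}

def sample_next(word: str) -> str:
    """Draw the next word at random, weighted by the model's probabilities."""
    words, weights = zip(*NEXT_WORD[word])
    return random.choices(words, weights=weights, k=1)[0]

def generate(start: str, length: int = 3) -> str:
    """Roll the dice word by word, starting from a prompt."""
    out = [start]
    for _ in range(length):
        if out[-1] not in NEXT_WORD:
            break
        out.append(sample_next(out[-1]))
    return " ".join(out)

print(generate("the"))  # e.g., "the dog barks"
print(generate("the"))  # e.g., "the cat sleeps" (same prompt, different result)
```

    › A real LLM does the same thing with billions of learned weights instead of four hand-written entries–that scale is where the incalculable randomness and unknowability come in.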

    › One thing is sure–what Elon Musk and others fear is real: a system of information so vast and so collective that it can be manipulated by humans but also manipulates us by propagating and amalgamating vast quantities of language. And language–words–is the conduit that drives all human progress forward.

    › That's a lot.

    › One positive way to see this is AI's potential to drive the world toward human equity. That is, to rightly place value on the individual human contribution in a world controlled by influential people with privileged access to resources. If we succeed, we can shine a light on the single artist or the humble laborer.

    › In this way, "verified human" takes on much more significance than establishing a model for human-AI media differentiation.

    › The model becomes a global appreciation of intrinsic human value.

  • › This model will only have value if the ethic behind it means something to the creators of the work.

    › It will be valuable if it has equity and credibility with the intended market or audience. 

    › As of this writing, we have yet to determine the economic structure of this venture.

    › One example of monetization, apart from an individual subscription model (licensing the use of the VerifiedHuman mark or name), would be to sell verification subscriptions to schools on a per-teacher or per-student basis. A "gold standard" of verification could be offered–one that goes beyond a creator's self-verification and includes a proactive audit of student work in a given school year.

  • › Laws can help support an ethic, but legislation historically lags.

    › It remains to be seen whether our legislative system (in the US) will continue to lose pace with technological advancement (as it has for the past four decades since the proliferation of the home computer, the past three since the growth of the internet, and the past two since the rise of the smartphone).

    › Attorneys General are now very concerned about sub-par products being marketed and sold largely on the strength of positive reviews written by AI, not by real people.

  • › Technology can assist the standard with detection and validation, but it has obvious limitations, such as rendering false positives or being evaded by detection scrubbing.
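
    › To see why, consider a deliberately naive sketch of a detector. Everything here is invented for illustration (the COMMON word list, the ai_score heuristic, the 0.5 threshold); no real product works this crudely, but the failure modes are the same in kind.

```python
# A toy "AI-text detector": flag text whose vocabulary looks too
# common and predictable. Purely illustrative; not a real detection method.
COMMON = {"the", "a", "is", "are", "and", "of", "to", "in", "it", "this"}

def ai_score(text: str) -> float:
    """Fraction of words drawn from a very common vocabulary."""
    words = [w.strip(".,").lower() for w in text.split()]
    return sum(w in COMMON for w in words) / max(len(words), 1)

def classify(text: str, threshold: float = 0.5) -> str:
    return "flagged as AI" if ai_score(text) >= threshold else "looks human"

# A plain-spoken human sentence trips the detector: a false positive.
print(classify("It is the truth and it is in the open."))  # flagged as AI

# Unusual wording slips right past it: detection "scrubbing" in miniature.
print(classify("Resplendent chrysanthemums blossomed beside cerulean fjords."))  # looks human
```

    › Real detectors are far more sophisticated, but they face the same two traps: honest human work that reads as statistically "machine-like," and machine output reworded until it doesn't.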

  • › Organizations like Fairtrade have created models worth exploring.

    › Compliance with standards can be encouraged through simple verification–someone calling a creator and asking, How'd you make this picture? When and where was it created? What kind of software do you use in your work?–or via more comprehensive auditing systems.

    › If people accept and use the VerifiedHuman standard and it is discovered they do not comply with it, they could be subject to revocation of the right to use the mark, public exposure, or even litigation in cases of gross misconduct.

    › Enforcement of the standard requires more attention. A plan to enlist the services of an auditing organization (like ISEAL, in cooperation with ETS, NASM, SACS, etc.) is in development now.
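
    › As a thought experiment, the verification questions above could be captured in a simple record. The sketch below is entirely hypothetical–the HumanAttestation structure, its field names, and the qualifies rule are mine, not a published VerifiedHuman schema–but it shows how self-attestation and the AI-ASSISTED/AI-CREATED distinction might fit together.

```python
from dataclasses import dataclass, field
from datetime import datetime

@dataclass
class HumanAttestation:
    """Hypothetical record an auditor could check against a creator's answers."""
    creator: str                    # who claims the work
    work_title: str                 # what is being claimed
    created_at: datetime            # when it was made
    created_where: str              # where it was made
    tools_used: list[str] = field(default_factory=list)  # software and tools involved
    ai_assisted: bool = False       # AI helped (e.g., spellcheck, noise removal)
    ai_created: bool = False        # AI generated the substance of the work

    def qualifies(self) -> bool:
        # Under this sketch, only work not created by AI carries the mark;
        # AI-assisted work would rely on the delineation discussed earlier.
        return not self.ai_created

record = HumanAttestation(
    creator="Jane Doe",
    work_title="Harbor at Dusk",
    created_at=datetime(2023, 4, 12),
    created_where="Duluth, MN",
    tools_used=["film camera", "darkroom"],
)
print(record.qualifies())  # True
```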

MAY 2023 – DEC 2023

› MORE TO FOLLOW...

*MICAH VORARITSKUL

This narrative documents the development of VerifiedHuman®.

We hope the story continues.
