We’ve been hearing a lot about AI recently, with regard to automated essay-writing, drawing and painting, and so on. Of course, none of it is AI; it’s machine learning with snappy marketing. True AI remains fictional, and will be for quite some time. But the distinction ultimately doesn’t matter.
All technologies have a problem in common, and we don’t have to look beyond the nearest mirror to find it. Oh, not you, esteemed reader, but people in general. We look for shortcuts, especially in our thinking and other work, and we’re willing to turn a blind eye to almost anything if it makes our lives easier.
We don’t check our sources. We don’t verify, or think critically. We cut every cognitive corner we can. Fake AI is certainly a threat in that regard, and it’s the same threat as fake news, conspiracy theories, and other forms of manipulation that work because we always prefer a vaguely plausible quick answer to a laboriously obtained correct one.
Will people use these fancy ML systems to cheat in academia, and at work, and when applying for jobs, and in doing every possible thing that the systems can be used for? Certainly. Absolutely. It could never be otherwise. We all know it.
I do tend to think that if your chosen test of a person has been automated, then the test itself is (and always was) insufficient — for example, I think the age of essay-writing for assessment purposes should probably draw to a close. But that’s an aside, which ignores the larger ethical angle of the technologies involved.
Can these systems be enormously useful, obviating many kinds of busywork, providing new access to forms of art, and so on? Of course. It would be irrational to claim otherwise, notwithstanding the eternal debate regarding what art is. There is good to be found in almost anything, too. But in real life, the scales are rarely balanced.
If you’re working in fake AI and you allow it to be misrepresented, including by calling it AI and thus lending it an air of entirely unearned correctness, authority, and dependability, then you’re attacking all of us. You’re a facilitator of our self-destruction, handing ever more powerful weapons to irresponsible children. Just as with any technology, like the automobile, or the aeroplane, or the computer.
I wouldn’t wish to deprive anyone of their computer-generated paintings, or their customer support chatbots, or their assistive text-cleanup, and all the rest. But let’s not pretend we can have those things without also having to deal with manufactured homework, and deception in job applications, and fake news, and fabricated photographic evidence, and so on.
The two sides are inseparable, which means that engineering and ethical responsibility are inseparable. That’s another lesson we’ve yet to internalise.
The problem with AI is humans.