Democratize AI: Event Series

By Caitlin Ballingall

Introduction

On March 8, I attended the launch of Democratize AI, an event series that now takes place on the second Thursday of each month at Work-Bench. The first event in the series was a panel discussion on “The Future of Work”: more specifically, artificial intelligence in the workplace.

Panelists included:

Arthur Tisi – the CEO of MeaningBot.com. He has a background in engineering, computer science, and marketing. One of Arthur’s central goals is showing how people can use AI products, services, and solutions to solve real-world business and social problems, rather than treating artificial intelligence as simply a shiny object.

Bill Marino – the co-founder and CEO of Uru, which uses computer vision and deep learning to help businesses and brands better understand and leverage all the video being created on the internet.

Oliver Christie – a consultant at Foxy Machine, who advises companies on the use of AI and business strategy to produce radical transformation.

Questions posed to the panel included:

  • How are artificial intelligence and automation transforming the way humans work?
  • What are the impacts on society and relationships in the workplace?
  • What will recruiters look for in an AI-first world?

Many themes emerged, such as AI acting as a second opinion for doctors and patients in the medical field, AI eliminating bias in the hiring process, and the jobs (if any) AI will replace in the future. For the purposes of this paper, I will focus on the main themes that emerged from the event and how they relate to the class discussions and readings, rather than attributing opinions to specific panelists.

Definition of Artificial Intelligence

To kick off the discussion, the panelists were asked whether they could arrive at a shared definition of AI. This proved difficult, but ideas around “pure AI,” “narrow AI,” and AI cognition were among the main themes. The panelists did agree that AI is ready for a “pivot,” as the technology and the concepts that surround it still seem grounded in “legacy thinking.”

Zooming in on the notion of narrow AI, the panelists defined it as AI that does not think like a human and can only do one thing in a given situation. To move AI past this legacy thinking, developers should use the human mind as a model and create AI that embodies cognition, that is, AI that can adapt to a situation. The panelists noted that some people believe AI can already do this: because of pop culture’s portrayal of AI in television and movies, the term has become alarmist and no longer reflects what AI is or can be.

Thinking of AI as having human cognition is daunting, and viewing the idea through the lens of pop culture, which depicts this evolution as apocalyptic, the beginning of the end, has its drawbacks. In our class we referenced Black Mirror episodes and discussed when AI has truly gone too far: when it has become self-aware, or in Don Norman’s words, “deceitful.” The truth, however, is that AI has been around for some time and has not taken over the world; it is far from it.

During the panel, the speakers noted that AI was first developed in the 1950s and has failed a number of times since. In “Practices for Machine Culture,” Phoebe Sengers references this history by way of cybernetics: small mobile robots that moved around with little cognition. She claims these robots “fell out of fashion,” and research soon shifted to the cognitive abilities of AI, with the hope that one day the AI system would be merged back into the robot, or “cybernetic.” Sengers states that this merging of the two concepts was indefinitely deferred. The panelists echoed this, describing the development of more cognitive, tangible, and human-like AI as a “far off” idea, with some speakers expressing that we will not see it in our lifetime.

Human & AI Job Integration

With that, the conversation moved to the near future: AI job integration, with humans working alongside AI to understand it and use it to their advantage, as is already being done in the medical field. Doctors use AI to transcribe patient interviews, speaking the words and letting the computer type them up, which saves massive amounts of time. Even more innovative, however, is the use of AI as “decision support.”

In 2014, Sloan Kettering integrated IBM’s Watson into its patient diagnostic process. Watson is an AI with a large database of medical research documents. Doctors submit patient files to Watson, and within minutes the system formulates a medical opinion and test recommendations for the patient. Watson has become the “second opinion” for doctors and patients, saving both parties time, by reducing the number of tests the patient must endure, and money in medical expenses. Watson also works through machine learning: the AI draws not only on uploaded medical research documents but also on patient files, learning to recognize similar patterns for future diagnoses.

This is truly remarkable: doctors see countless patients, and having to recall, compare, and evaluate diagnoses is difficult, whereas Watson, being a machine, can recall everything and, furthermore, learn. In The Invisible Computer, Don Norman sympathizes with humans, stating, “People excel at qualitative considerations, machines at quantitative ones. As a result, for people, decisions are flexible because they follow qualitative as well as quantitative assessment, modified by special circumstances and context. For the machine, decisions are consistent, based upon quantitative evaluation of numerically specified, context-free variables. Which is to be preferred? Neither: we need both.” I think this sums up where AI should be: decision support in the medical field, because doctors and nurses are human beings who, as much as they try, cannot remember every detail of every patient file to draw comparisons, and should not be faulted for it. Offloading this job to an AI as a second opinion is truly ideal.
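The pattern-matching idea behind this kind of decision support can be sketched in a few lines. The following is a deliberately toy illustration, not Watson’s actual method: it represents each past case as a set of symptoms and suggests diagnoses by retrieving the most similar prior cases. All field names and data here are invented.

```python
# Toy "decision support" sketch: suggest likely diagnoses by retrieving
# the most similar past cases. Purely illustrative -- a real clinical
# system like Watson is far more sophisticated.

def jaccard(a, b):
    """Similarity between two symptom sets: |A ∩ B| / |A ∪ B|."""
    a, b = set(a), set(b)
    return len(a & b) / len(a | b) if a | b else 0.0

def second_opinion(new_case, past_cases, k=2):
    """Rank past cases by symptom overlap and return the top-k diagnoses."""
    ranked = sorted(past_cases,
                    key=lambda c: jaccard(new_case, c["symptoms"]),
                    reverse=True)
    return [(c["diagnosis"], round(jaccard(new_case, c["symptoms"]), 2))
            for c in ranked[:k]]

past = [
    {"symptoms": {"fever", "cough", "fatigue"}, "diagnosis": "flu"},
    {"symptoms": {"sneezing", "runny nose"},    "diagnosis": "cold"},
    {"symptoms": {"fever", "rash"},             "diagnosis": "measles"},
]

print(second_opinion({"fever", "cough"}, past))
```

Even this crude sketch shows the appeal: the machine consistently compares a new case against every prior case, a quantitative recall task no human can match, while the doctor supplies the qualitative judgment Norman describes.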

AI Eliminating Bias

Since one of the panelists’ companies works to eliminate bias in the hiring process, this idea was discussed at length. As it was also International Women’s Day, I suspect the event hosts thought the topic would make up for the fact that there were no women on the panel, though the moderator was a woman. Either way, the discussion focused on AI that could crawl resumes to find candidates for a specific job by looking at their experience first, not their name or gender, to eliminate bias. When people screen applications there is deep bias, but AI can eliminate it much like a blind audition, becoming an assessment tool for employers.

This idea, like decision support, sounded great, but the panel pointed out that AI can inherit bias from the people who create it; the idea of what makes a great candidate can be deeply subjective. So there also has to be a plan in place to eliminate bias in the creation phase. The panelists did not discuss what that would look like.
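The “blind audition” mechanism the panel described can be sketched simply: strip identifying fields from each resume before anything scores it. This is a minimal illustration under invented field names and a made-up scoring rule, not any panelist’s actual product.

```python
# Toy "blind screening" sketch: drop identifying fields from each resume
# before scoring, so the ranking sees only experience. Field names and
# the scoring rule are invented for illustration.

IDENTIFYING_FIELDS = {"name", "gender", "photo"}

def redact(resume):
    """Return a copy of the resume with identifying fields removed."""
    return {k: v for k, v in resume.items() if k not in IDENTIFYING_FIELDS}

def score(resume, required_skills):
    """Naive score: matched required skills plus years of experience."""
    skills = set(resume.get("skills", []))
    return len(skills & required_skills) + resume.get("years_experience", 0)

def rank_blind(resumes, required_skills):
    """Redact first, then rank -- the scorer never sees who applied."""
    redacted = [redact(r) for r in resumes]
    return sorted(redacted, key=lambda r: score(r, required_skills), reverse=True)

candidates = [
    {"name": "A. Smith", "gender": "F", "skills": ["python", "ml"], "years_experience": 4},
    {"name": "B. Jones", "gender": "M", "skills": ["python"], "years_experience": 2},
]
ranked = rank_blind(candidates, {"python", "ml"})
```

Note the panel’s caveat applies even here: redaction only hides surface identifiers, while any bias baked into the scoring rule itself, such as how `score` weights experience, survives untouched.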

Conclusion

Lastly, the panelists briefly discussed the skill set people will need when entering the AI job field. Their suggestions focused on three ideas. (i) More machine learning engineers, which is relevant to Watson in the medical field. (ii) Productizing AI: helping consumers know what the technology can do, meeting their expectations, and translating these technologies into tools that work for people in their everyday work environments. (iii) Companies are now hiring for emotional intelligence over IQ, a trait that matters because the creative and emotional side of things will help companies advance in tech and AI.

Overall, the panelists agreed that AI’s purpose should come from “us”: it should be a human decision where it goes, and it is currently missing purpose. Norman echoed this sentiment as well, stating, “However, this is useful only if the machine adapts itself to human requirements. Alas, most of today’s machines, especially the computer, force people to use them on their terms, terms that are antithetical to the way people work and think.” AI must have a purpose as it evolves, and we have a responsibility to define it.
