Democratize AI: Event Series

By Caitlin Ballingall


On March 8, I attended the launch of Democratize AI, an event series that now takes place on the second Thursday of each month at Work-Bench. For this first event in the series, the venue hosted a panel discussion on “The Future of Work” with AI, or more specifically, artificial intelligence in the workplace.

Panelists included:

Arthur Tisi – a CEO with a background in engineering, computer science, and marketing. Among Arthur’s central goals is helping people use AI products, services, and solutions to solve real-world business and social problems, rather than seeing artificial intelligence as simply a shiny object.

Bill Marino – the co-founder and CEO of Uru, which uses computer vision and deep learning to help businesses and brands better understand and leverage all the video being created on the internet.

Oliver Christie – a consultant at Foxy Machine, who advises companies on the use of AI and business strategy to produce radical transformation.

Questions posed to the panel included:

  • How are artificial intelligence and automation transforming the way humans work?
  • What are the impacts on society and relationships in the workplace?
  • What will recruiters look for in an AI-first world?

Many themes emerged, such as AI acting as a second opinion for doctors and patients in the medical field, AI eliminating bias in the hiring process, and the jobs (if any) AI will replace in the future. For the purposes of this paper I will focus on the main themes that emerged from the event and how they relate to our class discussions and readings, rather than attributing opinions to specific panelists.

Definition of Artificial Intelligence

To kick off the discussion, the panelists were asked whether they could come to an agreed-upon, shared definition of AI. This proved difficult, but ideas around “pure AI,” “narrow AI,” and AI cognition were among the main themes. The panelists did agree that AI is ready for a “pivot,” as the technology and the concepts that surround it still seem grounded in “legacy thinking.”

Zooming in on the notion of narrow AI, the panel defined it as AI that does not think like a human: AI that does only one thing in a given situation. To broaden AI past this legacy thinking, developers should use the human mind as a model and create AI that embodies cognition, that can adapt to a situation. Panelists noted that some people believe AI can already do this; because of pop culture’s depiction of AI in television and movies, the term has become alarmist and does not truly reflect what AI is or can be.

Thinking of AI as having human cognition is daunting, and viewing this idea through the lens of pop culture, which depicts this evolution as apocalyptic, the beginning of the end, has its drawbacks. In our class, Black Mirror episodes were referenced, and we discussed that AI has truly gone too far when it becomes self-aware, or in Don Norman’s words, “deceitful.” However, the truth is that AI has been around for some time, has not taken over the world yet, and is far from doing so.

During the panel, the speakers noted that AI was first developed in the 1950s and has failed a number of times since. In Practices for Machine Culture, Phoebe Sengers references this by way of cybernetics: small mobile robots that moved around with little cognition. She writes that these robots “fell out of fashion,” and research soon shifted to the cognitive abilities of AI. The hope was that one day the AI system would be merged with the robot, or “cybernetic,” but Sengers states that this merging was indefinitely deferred. The panelists echoed this vision of AI becoming more cognitive, tangible, and human-like as a “far off” idea, with some speakers saying we would not see it in our lifetime.

Human & AI Job Integration

With that, the conversation shifted to what is in the near future: AI job integration, with humans working alongside AI to understand it and use it to their advantage, as is already being done in the medical field. Doctors use AI to transcribe patient interviews, speaking the words and letting the computer type them up, saving massive amounts of time. What is even more innovative, however, is the use of AI as “decision support.”

In 2014, Sloan Kettering integrated IBM’s Watson into its patient diagnostic process. Watson is an AI system with a large database of medical research documents. Doctors submit patient files to Watson, and within minutes the system formulates a medical opinion and test recommendations for the patient. Watson has become the “second opinion” for doctors and patients, saving both parties time, by reducing the number of tests the patient must endure, and money in medical expenses. Watson also works through machine learning: the AI not only draws on uploaded medical research documents but also learns from patient files to recognize similar patterns for future diagnoses.

This is truly remarkable: doctors see countless patients, and recalling, comparing, and evaluating diagnoses can be difficult, while Watson, being a machine, is able to recall everything and, furthermore, to learn. In The Invisible Computer, Don Norman sympathizes with humans, stating, “People excel at qualitative considerations, machines at quantitative ones. As a result, for people, decisions are flexible because they follow qualitative as well as quantitative assessment, modified by special circumstances and context. For the machine, decisions are consistent, based upon quantitative evaluation of numerically specified, context-free variables. Which is to be preferred? Neither: we need both.” I think this sums up where AI should be: decision support. In the medical field, doctors and nurses are human beings, and as much as they try, they cannot remember every detail of every patient file to draw comparisons, and they should not be faulted for this. Offloading this job to an AI as a second opinion is truly ideal.
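The decision-support idea can be sketched as a toy similarity search over past cases. This is purely illustrative: the case data, symptom vocabulary, and cosine-similarity scoring are my own assumptions, not how Watson actually works.

```python
from collections import Counter
import math

def cosine_similarity(a, b):
    """Cosine similarity between two symptom-count vectors (Counters)."""
    common = set(a) & set(b)
    dot = sum(a[k] * b[k] for k in common)
    norm_a = math.sqrt(sum(v * v for v in a.values()))
    norm_b = math.sqrt(sum(v * v for v in b.values()))
    if norm_a == 0 or norm_b == 0:
        return 0.0
    return dot / (norm_a * norm_b)

def second_opinion(new_symptoms, past_cases, top_n=2):
    """Rank previously diagnosed cases by similarity to a new patient's symptoms."""
    new_vec = Counter(new_symptoms)
    scored = [
        (cosine_similarity(new_vec, Counter(symptoms)), diagnosis)
        for diagnosis, symptoms in past_cases
    ]
    scored.sort(reverse=True)
    return scored[:top_n]

# Hypothetical past cases the system has "learned" from.
past = [
    ("flu", ["fever", "cough", "fatigue", "aches"]),
    ("migraine", ["headache", "nausea", "light-sensitivity"]),
    ("cold", ["cough", "congestion", "sore-throat"]),
]
print(second_opinion(["fever", "cough", "aches"], past))
```

As with Watson’s role as a “second opinion,” the output is a ranked list of candidates for a human doctor to evaluate, not a final diagnosis.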

AI Eliminating Bias

As one of the panelists’ companies works to eliminate bias in the hiring process, this idea was discussed at length. (Also, as it was International Women’s Day, I think the event hosts thought this topic would cover for the fact that there were no women on the panel, though there was a female moderator.) Either way, the discussion focused on AI that could crawl resumes to find candidates for a specific job by looking at their experience first, not their name or gender, to eliminate bias. When people screen candidates by hand there is deep bias, but with AI this is eliminated, like a blind audition. AI becomes an assessment tool for employers.
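The “blind audition” approach can be sketched in a few lines: strip identity fields before any scoring happens. The field names and the scoring rule here are my own assumptions, a toy illustration rather than any panelist’s actual product.

```python
def anonymize(resume):
    """Drop identity fields so screening sees only experience and skills."""
    hidden = {"name", "gender", "photo"}
    return {k: v for k, v in resume.items() if k not in hidden}

def rank_candidates(resumes, required_skills):
    """Score anonymized resumes purely on skill overlap and years of experience."""
    def score(r):
        skills = set(r.get("skills", []))
        return len(skills & required_skills) + 0.1 * r.get("years_experience", 0)
    blind = [anonymize(r) for r in resumes]
    return sorted(blind, key=score, reverse=True)

candidates = [
    {"name": "A. Smith", "gender": "F",
     "skills": ["python", "ml"], "years_experience": 6},
    {"name": "B. Jones", "gender": "M",
     "skills": ["python"], "years_experience": 2},
]
ranked = rank_candidates(candidates, {"python", "ml"})
print(ranked[0])  # identity fields are gone; ranking used only skills and experience
```

Note that this only relocates the bias question: whoever writes the `score` function still decides, subjectively, what counts as a strong candidate.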

This idea, like decision support, sounded great, but the panel pointed out that AI can inherit bias from the people who create it; the idea of what makes a great candidate can be deeply subjective. There would therefore have to be a plan in place to eliminate bias in the creation phase as well, though the panelists did not discuss what that would look like.


Lastly, the panelists briefly discussed the skill sets people would need when entering the AI job field. Their suggestions focused on three ideas: (i) more machine learning engineers, which is relevant to systems like Watson in the medical field; (ii) productizing AI, helping consumers understand what the technology can do, meeting their expectations, and translating these technologies into people’s everyday work environments; and (iii) hiring for emotional intelligence over IQ, since the creative and emotional side of things will help companies advance in tech and AI.

Overall, the panelists agreed that AI’s purpose should come from “us”: it should be a human decision where it goes, and it is currently missing purpose. Norman echoed this sentiment back in 1990, stating, “However, this is useful only if the machine adapts itself to human requirements. Alas, most of today’s machines, especially the computer, force people to use them on their terms, terms that are antithetical to the way people work and think.” As AI evolves it must have a purpose, and we have a responsibility to define it.

When Museums Start Selling Their Collections, What Does That Mean for Preservation?

By Caitlin Ballingall

In Michele Valerie Cloonan’s article “W(H)ITHER Preservation?” (2001), the author references a public opinion survey on the conservation of cultural property. The survey found that ninety-five percent of the adults polled either strongly agreed or agreed with the statement, “The collections in our nation’s museums, libraries and historic houses need to be preserved” (Gallup Organization 1996). But at what cost? This statement reminded me of an article I read last year about the Berkshire Museum, which was looking to better serve its community and visitors by becoming a new “innovative 21st-century institute,” with a focus on history and science. To fulfill this plan, the Museum would auction 40 artworks from its collection of 2,400. The sale was projected to bring in $50 million to support the Museum’s $20 million “face-lift” and to increase its endowment.


This plan was met with substantial opposition from two prominent organizations, the American Alliance of Museums and the Association of Art Museum Directors. The organizations even issued a joint statement saying: “One of the most fundamental and longstanding principles of the museum field is that a collection is held in the public trust and must not be treated as a disposable financial asset.” I think this raises a larger question: do collections limit institutions? What happens when an institution wants to change its mission and, to do so, must purge parts of its collection to make room for change and growth?


One issue facing the Berkshire Museum is that it made a promise to Norman Rockwell that his work would be maintained in the permanent collection, which raises the question of artists’ rights in this matter. If an artist sells or donates a painting to an institution specifically because they want their work displayed and cared for by that institution, is it ethical to sell the work? In some cases the popularity of the artist and their work comes into play when making these decisions.


Consider the case of Richard Serra’s short-lived public work Tilted Arc. The large curved wall was installed in Federal Plaza, New York City, in 1981. The public found the wall displeasing, even a monstrosity, and there were plans to reinstall the work in a different, more “convenient” location. In this case the artist voiced his opinion: Serra claimed the art was made for that space and that to remove it would be to destroy it. And so it was: eight years after it was installed, the wall was cut into three pieces and taken to a scrap yard.


By contrast, in the case of moving and selling Norman Rockwell’s artwork, the works have no specific tie to the museum in which they are housed. The paintings were not created for the Museum, and moving them would not change the meaning or integrity of the art. I also think it is a bold move on the part of the Berkshire Museum to choose Norman Rockwell’s work, as the artist is considered an American icon and most museums would covet it. This also sends a message that, regardless of an artist’s popularity, all work should be assessed equally for how it fits into the overall mission of the institution.


A second issue is that museums, excluding the Smithsonian institutions, must generate revenue in order to open their doors and turn on the lights each day. One way museums do this is by rotating and creating new exhibitions from their existing collections and from works outside them, to draw in more visitors. From the late 1960s to the early 1970s, Thomas Hoving was the director of the Metropolitan Museum of Art. Hoving is credited with creating the “museum blockbuster” and one of the most iconic areas of the Met, the Temple of Dendur. He is also known for selling parts of the Met’s permanent collection to private dealers, works such as Van Gogh’s “Olive Pickers” and Rousseau’s “Tropics.” Selling artworks like these helped fund exhibits like the Temple of Dendur, which is a huge part of the Met’s appeal to visitors today.


However, in 1973 the Met entered a period of fuller transparency, with an emphasis on “more public disclosure,” by creating the “Report of Transactions.” It seems that under Hoving’s leadership 32 paintings were sold to private art dealers; these paintings had been given to the Museum by a wealthy aristocrat who wished for them to be housed within the Met’s collection or placed within other institutions. The Met claims the Report of Transactions had little to do with the sale of these 32 paintings and more to do with creating a plan and a transparent procedure for deaccessioning art. In 1983 the Brooklyn Museum faced a comparable matter, in which the Attorney General filed a case that helped set the standards of conduct for museum officials.


Thus, what are the codes of ethics governing deaccessioning art? According to the New York Times article written about the Berkshire Museum expansion: “The American Alliance of Museums’ code of ethics says that proceeds from the sale of collections shall not ‘be used for anything other than acquisition or direct care of collections.’ The Association of Art Museum Directors’ code includes an even narrower definition of when sales are permissible, stating: ‘A museum director shall not dispose of accessioned works of art in order to provide funds for purposes other than acquisitions of works of art for the collection.’” So is the Berkshire Museum in breach of these codes of ethics if it does expand?


The Berkshire Museum would like to sell paintings in order to fund its expansion. The expansion will serve a mission that is history- and science-based and falls in line with the popular buzz-phrase “innovative 21st-century institute.” When I read that statement it gave me pause. I have found that many institutions use this buzz-phrase because it appeals to grant applications and outside funders. It is therefore worrisome that the Berkshire Museum could be selling off its collections to cater to an agenda or, worse, a possible passing fad.


Furthermore, with such a large push for school education practices to be STEM (Science, Technology, Engineering, and Math) based, it is inevitable that art will get lost, as that is simply not where organizations are interested in placing their funds. By selling off the Rockwells along with the other artworks, I think this becomes a larger issue: the message that deaccessioning one genre of a collection sends to the public about cultural heritage as a whole.



Cloonan, Michele V. “W(H)ITHER Preservation?” The Library Quarterly, Vol. 71, No. 2 (Apr. 2001), pp. 231–242.

Moynihan, Colin. (2017, July 25). Berkshire Museum Planned Sale of Art Draws Opposition.

Brenson, Michael. (1983, March 18). Art People; Accord Ends Ethics Dispute.

Van Gelder, Lawrence. (1973, June 27). 1971–73 Deals Studied.

(1981). Richard Serra’s Tilted Arc.

Moynihan, Colin. (2018, February 9). Massachusetts Agrees to Allow Berkshire Museum to Sell Its Art.

Finkel, Jori. (2009, January 1). When, If Ever, Can Museums Sell Their Works?

Kachka, Boris. (2017, December 6). The Director and the Pharaoh: How Thomas Hoving Created the Museum Blockbuster When King Tut Became a Celebrity.
