On Friday, October 20th, I attended the SLA New York chapter's 2017 conference and expo at Baruch College in New York City. This year's theme was Resilience: Navigating the Information Landscape. The lineup featured fantastic speakers from all areas of the information professions, all of whom have been resilient and successful in their careers in the ever-changing information world. The conference hosted two keynote speakers and eight panels throughout the day. Although each panel featured noteworthy speakers with much insight on career paths and the growth of information professionals, this article will focus on the two keynote speakers and a panel on fake news and how to combat misinformation.
The conference kicked off with our morning keynote speaker, Clare Hart, CEO of Sterling Talent Solutions, talking about opportunity, resilience, and success. Hart started by congratulating the audience, most of whom have obtained or are in the process of obtaining a master's degree, a credential held by less than ten percent of adults over the age of 25 in the United States. A master's in Library and Information Science can open up many opportunities for individuals seeking professions in libraries, information, data, and, increasingly, the technology fields. Some hold the opinion that there will be less need for information professionals in the future as more information becomes accessible to the public. Hart countered by naming just a few information and data jobs that will see growth in the near future: Market Research Analyst, Data Scientist, Operations Research Analyst, and Data Governance/Data Domain Curator. At a time of societal change, where some doors close others will open, and there will always be a need for information professionals in the digital shift. We are in an age of acceleration, and "we can't escape these accelerations. We have to dive into them, take advantage of their energy and flows where possible, move with them, use them to learn faster, design smarter, and collaborate deeper." Hart addressed the need to retool our educational systems to maximize the skills and attributes that give people top employability, placing an emphasis on communication and planning skillsets. Referencing her own transitions between jobs, Hart gave us a list of things to write down during times of transition: what you did well, what you could have done better or differently, where you want to go and what kind of company or organization you want to work for, and, after researching that job, your top choices weighed by their pros and cons.
These lists will serve as lessons in self-awareness, one of the major components of being a successful and resilient information professional. It's important to remember that we are all unique, and with a master's in LIS we already have, or are working toward, "fabulous credentials" that will help us take advantage of the many opportunities being created in the market today. We must remember that it's our attitude that will help us get the most out of lifelong learning, and that being self-aware will lead to positive transitions. Hart concluded her presentation with those key takeaways to carry with us into our future professions.
With the rise of social media and the public's growing access to information and opinions, fake news has become a buzzword. One of the morning panels, In-Credible Sources: Fake News in the Info Age, addressed the fake news of today and how journalists and researchers can combat it. The panel was hosted by Brandy Zadrozny and Kathy Deveny. Zadrozny is a researcher and reporter for the Daily Beast and previously worked as a news librarian at ABC News and Fox News. Deveny is a managing director at Kekst, a leading strategic, corporate, and financial communications firm, and previously worked as an editor and writer, including as deputy editor of Newsweek. Both speakers focused on misinformation in an oversaturated information society. News outlets like Fox once employed news librarians to help with research and fact-checking, but with the cutting of many news librarian jobs, reporters are now in charge of doing their own research and fact-checking. Zadrozny pointed out that many people are simply tired of fact-checks and will continue to read what they want to read. A popular culprit of misinformation is what is known as the "sexy press release." In science and health research, scientists need funding, and they get funding by publishing articles. The articles are boosted by press releases that feed the public snippets of facts that are easily misconstrued into untruths. Deveny spoke about fake news in the corporate realm, much of which comes from competitors causing chaos. A company's reputation is more at risk now than ever as people take to social media and online boycotts emerge. How do we combat fake news in the media when people just aren't skeptical enough? This is the challenge for information professionals like Zadrozny and Deveny: how do we teach people to be more skeptical?
A short answer lies somewhere in feeding the public what they want: turning fact-checks into something easily digestible by telling a story in a fun and entertaining way. To combat fake news and get beneath the lies and untruths of misinformation, we need to follow through on skepticism and go with our gut feelings. If a story doesn't sound right, we need to get interested in checking things out; we need to start poking holes. By contradicting claims and being more skeptical, we create controversy and garner more attention. Journalists and reporters can also start by putting out factual videos to counter fake news. But as people are flooded with an abundance of information through Twitter, Facebook, and dark corners of the internet, the challenge of how to bring people to this level of skepticism remains.
The challenges that information professionals face are hard ones: how to bridge the gap between information producers and information consumers, how to teach people to be more skeptical, and how to make our transitions into a growing technological society positive ones. Our afternoon keynote speaker, Cynthia Cheng Correia, author, adjunct professor at Simmons College, and member of the Council of Competitive Intelligence Fellows, gave us some thoughts on how to address these challenges in her presentation, How Strong Is Your Professional Resilience? Correia teaches her students about competitive intelligence and how to prepare for the newer information landscape with tools for reading indicators of what lies ahead. The questions she aims to tackle revolve around building a collective resilience. By combining professional preparation and personal resilience, we can create professional resilience in mindset and perspective. Resilience means being able to persist and adapt in the face of change; it's not just about fixing problems, but about learning from them. In regard to changes in information tools, Correia stated that we need to keep up, to think about how to respond, and to anticipate. So how do we do this? We start with ourselves and our own independent resilience. It's important to foster healthy relationships through friends, family, colleagues, mentorships, and professional networks; these give us a foundation of support, validation, empathy, and perspective. We must also be able to make a plan, which aids focus, problem-solving, foresight, and compartmentalizing. While making plans we must also remember to maintain a positive outlook (adjusting our approach to the problem at hand), realism (looking at the situation as it is), and a constant forward-looking attitude.
Correia went further into the importance of fostering our passions and growing with them, stating that the growth process isn't about overcoming; it's about enjoying the process and learning from it. She addressed the barriers to resilience: denial, habits, biases, silos (walls that we put up as "safe spaces"), behaviors, fear of failure, lack of self-awareness, and life's challenges. These barriers exist not just in individuals but in whole professions. In order to keep up with the changing information landscape, we must move past these barriers, strengthen our resilience, and re-evaluate our educational system to provide the proper tools for successfully navigating an information-heavy society.
The world of information is constantly changing, and technologies are providing new ways of accessing and analyzing data and information. The information professions must keep up with these changes, following the latest news and trends in technology and building resilience in the workplace. This year's SLA conference addressed some of the issues we face amid these shifts and provided stepping stones for navigating them.
Donald Norman, writer and computer science researcher, has emphasized throughout his writings that the key to good design is understandability. But what does understandability mean when talking about the complex technology people increasingly rely on? In his 1998 book The Invisible Computer, Norman criticizes the complexity of technology, stating that "the dilemma facing us is the horrible mismatch between requirements of these human-built machines and human capabilities. Machines are mechanical, we are biological." Humans are creative and flexible; we interpret the world around us, often with very little information. We deal in approximations, not the accuracy of computers, and we are prone to error. Norman at first seems to pit technology and humans against each other: technology (digital) is precise and rigid, whereas humans (analog) are flexible and adaptive. He raises the question of who or what should dominate, the human or the machine? His conclusion is that computers and humans work well together and complement each other, but that we need to move away from the current machine-centered view toward a human-centered approach, making technology more flexible to human requirements. Now, almost twenty years later, we are witnessing an evolution in computing: the rise of Artificial Intelligence. To make way for A.I., large internet companies are starting to look toward human biology in the new makeup of computers. In his writing, Norman calls for strategies to make the relationship between humans and computers a more cooperative one. Does the current technological evolution mean that those calls have been answered?
Norman compares the computer and the human brain: computers are constructed to perform reliably, accurately, and consistently within one main machine, whereas the human brain carries out far more complex computations through the workings of vast numbers of neurons. As Cade Metz outlines in a recent article, "Chips Off the Old Block: Computers Are Taking Design Cues From Human Brains," tech companies are realizing that, with Moore's law in decline, progress is no longer about upgrading the traditional "single, do-it-all chip – the central processing unit"; it's about needing more computing power, needing more computers. Traditional chips cannot handle the massive amounts of data required by new technological research like that of A.I. As a new method, specialized chips are being created to work alongside the C.P.U., offloading some of the computation to various smaller chips. Spreading the work across many tiny chips makes the machine similar to the brain in energy efficiency.
In 2011, a top Google engineer, Jeff Dean, started exploring the concept of neural networks, "computer algorithms that could learn tasks on their own" (Metz), which elevated A.I. research in voice, image, and face recognition. These neural networks are similar to our own human ability to make sense of the world and our surroundings in order to decide what information to attend to and what to ignore. Gideon Lewis-Kraus's December 2016 article, "The Great A.I. Awakening," details in great depth the trial-and-error phase of training a neural network. The network learns to differentiate between things such as cats, dogs, and various inanimate objects, all while being supervised by the programmer or researcher, who corrects the machine until it starts producing the proper responses. Once a neural network is trained, it can potentially recognize spoken words or faces with more accuracy than the average human. Google Brain, the department that first started working on A.I. within the company, developed this approach to training under the notion that the machine might "develop something like human flexibility" (Lewis-Kraus).
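The supervise-and-correct loop described above can be sketched with the simplest possible "network": a single artificial neuron trained by error correction. This is only a minimal illustration, not Google Brain's actual architecture; the toy features and labels below are invented, with the labels standing in for the researcher who corrects the machine until it produces the proper responses.

```python
# A minimal sketch of supervised learning by error correction: a single
# artificial neuron (a perceptron) whose weights are nudged every time
# its answer disagrees with the supervising label.

def train_perceptron(samples, labels, max_epochs=1000, lr=0.1):
    """Adjust weights after each mistake until a full pass is error-free."""
    w, b = [0.0, 0.0], 0.0
    for _ in range(max_epochs):
        mistakes = 0
        for (x1, x2), y in zip(samples, labels):
            pred = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            err = y - pred  # the "correction" signal: -1, 0, or +1
            if err != 0:
                mistakes += 1
                w[0] += lr * err * x1
                w[1] += lr * err * x2
                b += lr * err
        if mistakes == 0:  # no corrections needed: training is done
            break
    return w, b

def predict(w, b, point):
    return 1 if w[0] * point[0] + w[1] * point[1] + b > 0 else 0

# Invented toy task standing in for "cat vs. not-cat": the label is 1
# only when the two features together are large enough.
samples = [(0.0, 0.0), (0.0, 1.0), (1.0, 0.0), (0.1, 0.2), (1.0, 1.0), (0.9, 0.9)]
labels = [0, 0, 0, 0, 1, 1]
w, b = train_perceptron(samples, labels)
```

Because the corrections stop once the neuron answers every example properly, the loop mirrors, in miniature, the supervised phase Lewis-Kraus describes; real networks simply do this with millions of weights and examples.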
In his argument of humans versus computers, Norman discusses the evolution of human language, observing that "communication relies heavily upon a shared knowledge base, intentions, and goals," resulting in a "marvelously complex structure for social interaction and communication." But what if a machine could grasp language? A good example of machine evolution in language is Google Translate. Up until its major update in November 2016, Google Translate was only useful for translating basic dictionary definitions between languages; whole sentences or passages from books would lose their meaning because the words were translated separately, not in the context of the entire passage. But once Google applied its neural network research to Google Translate, the service improved radically overnight. At a conference in London introducing the newly improved machine-translation service, Google's chief executive, Sundar Pichai, provided this example:
In London, the slide on the monitors behind him flicked to a Borges quote: “Uno no es lo que es por lo que escribe, sino por lo que ha leído.”
Grinning, Pichai read aloud an awkward English version of the sentence that had been rendered by the old Translate system: “One is not what is for what he writes, but for what he has read.”
To the right of that was a new A.I.-rendered version: “You are not what you write, but what you have read.” (Lewis-Kraus, G.)
With the A.I. system, Translate's overnight improvements were "roughly equal" to the total improvements made over its entire previous existence.
I would argue that the development of A.I. is taking a more human-centered approach to computers than has ever been seen. The method of using neural networks draws directly on one of our greatest human abilities: learning. A machine that can learn on its own is flexible, adapting to its environment. Norman raises two different themes in human-computer relationships. One, which he believed society occupied at the time of publication in 1998, is making people more like technology. The other is "the original dream behind classical Artificial Intelligence: to simulate human intelligence," making technology more like people. I believe we are at the gates of that dream of Artificial Intelligence, but instead of trying to make one more like the other, humans and computers are taking an approach that builds on each other's strengths, through computer logic and human flexibility.
Norman, D. A. (1998). The Invisible Computer: Why Good Products Can Fail, the Personal Computer Is So Complex, and Information Appliances Are the Solution. MIT Press. Chapter 7: "Being Analog," http://www.jnd.org/dn.mss/being_analog.html.
Metz, C. (2017). "Chips Off the Old Block: Computers Are Taking Design Cues From Human Brains." The New York Times, https://www.nytimes.com/2017/09/16/technology/chips-off-the-old-block-computers-are-taking-design-cues-from-human-brains.html?_r=0. Accessed September 25, 2017.
Lewis-Kraus, G. (2016). "The Great A.I. Awakening." The New York Times, https://www.nytimes.com/2016/12/14/magazine/the-great-ai-awakening.html. Accessed September 25, 2017.