AI Use: Your Thoughts & Feelings

Polling all colleagues and friends! Calling you here to serve up the results of the questionnaire you so kindly responded to about the one, nay—the many, data-slayer: artificial intelligence (AI). (And even more specifically generative AI.)

Impetus

The original impetus for this poll was a panel I participated in at the Healthcare Internet Conference during the first week of November. I knew that I was not particularly enamored of current generative AI tools, so I sought input from the many, whose opinions are often a better representation of the public-at-large than my own editor-at-medium professions (as in -sions that were professed).*

I also needed input because most of my own job functions are not ones that AI is particularly useful for, but more about that later. Let’s break down the deets.

NOTE: Unless stated otherwise, this blog was written purely by—me—Jennifer Brass Jenkins, and is owned by—me—Jennifer Brass Jenkins. Be assured, however, that no AI was harmed in the making of it.

Questions, Answers, Predictions, & Hypotheses

The poll consisted of 10 questions (I didn’t realize how nicely that number came out). Some Qs asked for more clarity, some asked for demographic information, and most asked for open-ended answers, in order to tap y’all’s hidden genius.

Out of about 80 recipients, I received 39 responses. (YAASSS–WAY TO SHOW UP FOR THE TEAM Y’ALL!!) Though not all questions were required, and thus not every question was answered by every respondent, each question received at least 30 responses.

The first questions were basic:

  1. Do you use AI in your work?
  2. If so, how often?

Fig 1: Pie chart showing the percentage of users who use AI tools in their current work (71.8%) vs. those who do not (28.2%).

Fig 2: Pie chart showing how often those who use generative AI apply it with answer choices of daily, weekly, monthly, as needed, and a must-I option. (Note that two people chose “Do I really have to?”, which was one of my favorite answers.)

Out of the 39, 72% said they use generative AI in their current work. Of those, 18.8% said they use it daily. I’mma’ go out on a limb here and say that I think our web developers probably use it the most, aka daily.
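For the curious, the chart percentages do map back onto whole respondents. Here is a minimal sanity-check sketch; note that the counts below are inferred from the rounded percentages in the charts, not taken from the raw survey data:

```python
# Rough sanity check: do the rounded chart percentages correspond
# to whole numbers of respondents out of 39? (Counts are inferred
# from the percentages shown in Fig 1, not from the raw data.)

TOTAL = 39

def pct(count: int, total: int) -> float:
    """Percentage rounded to one decimal place, as the charts display it."""
    return round(count / total * 100, 1)

users = 28                 # matches the 71.8% "use AI" slice
non_users = TOTAL - users  # 11, matches the 28.2% "do not" slice

print(pct(users, TOTAL))      # 71.8
print(pct(non_users, TOTAL))  # 28.2
```

So "72%" in the text is just the 71.8% slice rounded up, or 28 of the 39 respondents.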

Read this pithy commentary by our lead developer on how he uses and values AI. (TL;DR: He loves it and uses it for troubleshooting and ideation at work and home.)

Image 1: Chat screen with lead developer Mark Thomas (at U of U Health) about his experience with new generative AI tools.

For those who have used it, but perhaps have not incorporated it into their own processes more regularly, I added the answer “as needed”. (It’s hard to work something into your workflow that you don’t regularly need :D.)

Prediction: I believe that a year from now, our answers to this poll will not be significantly different. I do think, however, that we will have better incorporated gen AI into our processes.

AI Use at U of U Health

The next question focused on AI at our institution. The question about where AI is being used allowed users to add any reply. There were three main categories into which the answers could be classed:

  1. Generative AI tool use specifically
  2. Institution services/department references (some more specific than others)
  3. “I dunno” responses

With 34 responses, here are the cited uses of AI that our respondents are aware of at U of U Health:

Generative AI

  • Photoshop (for use in image editing)
  • Utah Magazine (not sure how but with development–I think giving/donor development)
  • Video editing and advertising
  • Idea generation
  • Epic (the electronic medical record system used by the institution)
  • Writing/rewriting/refining
    • Emails
    • Interview outlines (rough drafts)
    • Podcast descriptions (rough drafts)
    • Meeting notes
    • Letters of recommendation
    • Social media posts
  • Article or book summarization for learning purposes
  • Stock image/illustration search
  • Troubleshooting (I assume in context of web development)

Institution Services/Departments

  • Radiology services/department image reading
  • Biomedical Informatics department
  • Identifying “critically ill newborns who are best candidates for rapid whole genome sequencing and using that to guide care for these newborns. We are doing this work in collaboration with Rady Children’s (https://pubmed.ncbi.nlm.nih.gov/36927505/).”
  • Lung cancer predictive screening: “We have used predictive models on lung cancer risk to help improve screening for lung cancer, the leading cause of cancer deaths. We have increased the odds of screening for lung cancer at U of U Health by 5-fold (https://pubmed.ncbi.nlm.nih.gov/37142092/).”

Obviously, if a respondent worked with a specific program/department/study using an AI tool, they were able to identify more specific use cases. 

Hypothesis: Many entities/programs are experimenting with the use of AI at U of U Health and this will continue for the next several years.

Time/Money Savings

The next question focused on savings in time and money. I did not structure this question so that a respondent could choose multiple answers. Fortunately, I did add an “other” option, through which respondents let me know about this misstep.

Due to that, I have restructured the original data-generated poll graph to better categorize all answers:

Fig 3: Answer categories (and number of answers) of potential time or money saving uses.

Please note that the answer “Other” consists of the following:

Other

  • Information Consolidation
  • Research
  • Brainstorming
  • Email Responses
  • Social Promotion
  • Longer Term Use
  • Audio

Greatest Potential Misuses

Question 5 asked users to choose (multiple choice) what they thought the greatest potential misuse of AI could be.

Fig 4: Line chart showing most selected answers regarding the greatest potential misuses or negative effects of AI.

Tied as the most selected were these two answers:

  1. Spreading misinformation
  2. Confusing intellectual copyright

Data collection and privacy ranked as the third-highest potential area of misuse.

I also inserted a more qualitative, open-answer question asking users how they felt about AI and the future of work. Many thought it was great but had unrealized potential. Some worried about keeping up with changes in AI tools and applying them to their work.

Summary: Most respondents worried less, however, about whether we should be using these tools and more about regulation, quality of work produced, and costs.

Identifying AI Use Elsewhere

For Question 7, I wanted to see if respondents could identify, or had identified, the use of AI by other entities. Of the 30 respondents to this question, some answers were pretty generic or simply “no”/“don’t know.”

Here are some of the more in-depth answers (categorized for easier analysis).

Obvious AI Use

(How can we join this group?)

Less Obvious & Generalized AI Use/Hearsay

Summary: These answers confirmed what we all see, or don’t see: Everyone is trying out the tool or has been using it and we may or may not be able to guess. (Unless you are as good as the person who noticed that some newsletters have no soul…I feel you.)

Best Uses for AI

Next question: What are the best uses for AI? I’ve categorized these answers in a graph by use case (note that some respondents identified multiple uses).

Fig 5: Answer categories (and number of answers) around best uses of AI.

Here is the full list of responses (summarized):

Research

  • Research (including time saving)
    • Social media influencer/hashtag research
  • Topic exploration

Ideation

  • Brainstorming
  • Teacher/Intern/Sounding board
  • Higher-level thinking (INTRIGUING)
  • Design

Content Creation

  • Outlines
  • Thesaurus
  • Repurposing
  • Headlines
  • Captioning
  • Persona/brand extension (specific example: deceased artist covers–INTRIGUING OR DISTURBING?)
  • Photos
  • Illustrations

Analysis: Data & Other

  • Analysis
    • User sentiment
    • Arguments/presentations
  • Error reductions in the analysis of large datasets
  • Large text set analysis
  • QA (quality assessment)
  • Summarization
  • Information display

Functionality/Toolset

  • Conduct repetitive tasks
  • Enhanced tools
  • Chatbots
  • Enhance/complete
  • Speed up tasks
  • Note-taking
  • Copyediting
  • Productivity

Summary: These responses, I believe, confirm our own experiences. (Since we all could potentially be “experts” in AI use.)

Worst Uses for AI

For this question we had 34 respondents. Again, note that some respondents identified multiple uses. See the answers, again categorized, followed by the full list:

Fig 6: Answer categories (and number of answers) around worst uses of AI.

Fact-Checking, Truth, and Editing

  • Use of AI as the final source of truth
  • Data verification/fact checking
  • Final drafts
  • Fake references

Classroom Work/Learning

  • Student use for classroom work
  • Inhibits creativity or skills

Process Impedimentation

  • Continual rewrites when you aren’t getting the rewrites you want from prompts
  • As the only tool

Misinformation

  • Misleading content
  • Use on news and government platforms
  • Intentional misinformation
  • Provider notes

Replacing the Hooomans

  • Replacing human thoughts and ideas
  • Replacing human-created work with lower quality work
  • Making bread–my favorite answer! And companies in San Francisco at least are experimenting with replacing humans in food service.
  • Replacing jobs
  • Takeover of content production

Copyright Infringement

  • Generating content without thought for copyright

Respondents

The final question, again not required, asked for each respondent’s primary professional identity so we could see which disciplines were represented in our survey.

Fig 7: Respondents classified by department, program, or entity.

Note that Departments/Programs include the following:

  • OPMO: Project Management Office
  • Service Line Director (Dermatology)
  • IT
  • UUMG: University of Utah Medical Group

Thank you to all those who participated. Your input was greatly appreciated!!

The Whole Enchilada

So, that’s quite a lot to digest. 

If I were to say one thing that you should remember, it would be this: be cautious about the tools you use and what data is going where.

Note that any information you enter that is proprietary to your work, such as meeting summaries or email rough drafts, may be used by publicly available AI tools (such as ChatGPT) to continue training their models.^

If you opt for a paid subscription model (which we all will have to eventually) and want to create something proprietary, consider the work it will take to customize it and whether you might eventually switch tools (time investment vs. the time the tool saves or the value it contributes).

In the end, nothing has really changed. It still goes back to time, money, and effort and how to make the best of the tools we have at hand.

Originally published on Pulse, the U of U Health Intranet, Dec 14, 2023

*Editor-at-large is a publishing title used in print, now often digital, publications. It refers specifically to an editor who writes on no one specific topic of specialty, but reviews trends and industry shifts.

My favorite editor-at-large of all time was ALS or Andre Leon Talley for the uninitiated. Both his perspective, as a Black American in fashion, and his self-deprecating take on fashion were unique and fantabulous.

^Source: Weighing the Open-Source, Hybrid Option for Adopting Generative AI, Harvard Business Review

Content Creation & Bias

Recognizing Our Own Cognitive Biases as Content Creators

Note: In this post I mention the phenomenon of “fake news.” This is to give contextual reference to ideas in the post, and I apologize beforehand to those of you who feel as thoroughly sick of the phrase “fake news” as I do.

Fig. 1: Bias (noun): prejudice in favor of or against one thing, person, or group compared with another, usually in a way considered to be unfair

As a community, whether we wanted to or not, we were recently brought face to face with news about ourselves we didn’t want to hear (and may still deny). We are inherently biased in our viewpoints and opinions.

For some people this is not a shocker. For most of us, the fake news phenomenon that hit rock bottom with the recent presidential election was a wake-up call whether we wanted it or not. It was a moment to look in our social media mirror and see writing on that mirror that pointed out the flaws in our reflection. (So very meta!)

Image: Self-reflection in several mirrors

I use this metaphor to say that the content we chose to consume, on whatever internet channel we frequented, was implicitly biased to match our own viewpoints, and we were forced to recognize this. (Which totally sucked. #amiright?)

We are biased in the content we consume. How biased are we in the content we create?

As we are so biased in the content we consume, how biased then are we in the content we create? And how do we combat that bias or at least become aware of it in our work?

Assumptions We Make Personally

I have always been aware of the concept of bias. Growing up in a household governed by strong and divergent views of our reality, this was readily apparent to me, and as a fairly judgmental teenager it was easy for me to see the biases of the authors I read, the teachers I listened to, and the views of the people around me.*

This led me to a deep skepticism of journalism. Not journalism in and of itself, which is incredibly detail-oriented and well-meaning, but of the belief that true journalism is unbiased and objective. This struck me as ludicrous. Anything a human individual writes is biased; therefore, anything we write as content creators will reflect our own personal biases, whether intentional or not.

Entering the world of content creation, I assumed that others had this same viewpoint. It was no challenge for me then to become a content creator for a brand/company/institution, since I assumed that my readers would be as educated as I was and be able to discern somewhat the biases inherent in the content I wrote.

The recent, unfortunate, phenomenon summed up by the phrase “fake news” reminded me that many, many people do not have the education or privileges of my own background, and that prodded me to be more aware of my own biases. Imagine how fascinated I was then to discover that there are multiple defined types of bias, called more specifically cognitive biases.^

Cognitive Bias: The Types of Bias & How They Work

Fig 2: Cognitive bias: deviation from rational judgement

There are, apparently, over 200 varieties of cognitive bias! Can you believe that? Here are three prevalent types of cognitive bias:

  1. Optimism bias: A bias that causes someone to believe that they themselves are less likely to experience a negative event (also known as unrealistic or comparative optimism).**
  2. Confirmation bias: The tendency to search for, interpret, favor, and recall information in a way that confirms one’s preexisting beliefs or hypotheses (also called confirmatory bias or myside bias).
  3. Normalcy bias: A bias that causes people to underestimate both the likelihood of a disaster and its possible effects, because people believe that things will always function the way things normally have functioned (or normality bias).

Our Biases as Content Creators

In the ‘verse of content, those three biases might be defined something like this:

  1. Optimism bias: Creating content, any content, will surely be good because content marketing is a cure-all (and my client wants it); or maybe: we have a content strategy, and while it might not be documented or well defined, it’s probably effective.
  2. Confirmation bias: The content we are creating is valuable to my company, brand, industry, or subject matter experts; therefore, it will be valuable to my audience. The way we approach our content from a company viewpoint is the same way our audience approaches it.
  3. Normalcy bias: The type of content I am creating has proven valuable in the past, therefore producing more content structured in the same way will continue to achieve the desired results.

Content biases can range from our beliefs about how our content strategy is working (optimism bias) to the format in which we structure our content (normalcy bias).

I looked at some of my own work through the lenses of these biases, and here are some things I discovered.

Optimism Bias

In our organization, we produce, on average, 65 pieces of content per month. This includes the following:

  • Evergreen webpages
  • Press releases
  • Blogs
  • Podcasts (which become webpages when transcripts are posted)
  • Article/feature style pieces
  • Videos
  • Infographics
  • Print pieces

I have been more studiously auditing our organization’s web content assets (4K+ pages specific to content marketing and roughly 30K+ evergreen web pages). At a certain point, you have to ask yourself if this is sustainable. While we work to abide by best practices, I sometimes wonder how effectively we are abiding by them and what assumptions we are making about the success of our various content formats.

We also have multiple content creators spread across multiple teams. The standards they employ and consider normal are not always what I would first think of as normal! My own content is biased toward tracing content efficacy to ROI, meaning I write and maintain a lot of content targeted to potential patients or students.

The best practices that work for the content I create do not apply to content created by other teams. My sphere of normalcy is not the same as others’. This is a bias that I try to be aware of when I’m working with other content creators. But boy is it hard!!

The best practices I apply to the type of content I create don’t necessarily apply to content created by others. While we start with standard practices, we don’t necessarily end up there.

As Robert Rose recently said in a webinar I watched (through the Content Marketing Institute), while we start with standard practices, we don’t necessarily end up there. And what’s normal for us may not work for all of our stakeholders. It’s a good thing to remember.

Confirmation Bias

A good example of confirmation bias in my work is a written piece we created for our joint replacement services. Our ortho services identified this specialty as a priority, so we determined to write something targeted to potential patients to help them decide if it was the right time to get a joint replacement.

We initially assumed that patients would approach this job, of finding information about a hip or knee replacement, the same way our specialists (and we!) think of it: “When to Get a Joint Replacement.” (Can you already see the problem here?)

Fully expecting the piece to have a good traffic footprint, we pulled data for the page after six months. We were shocked to discover that it was performing abysmally! A little keyword research analysis later, and it was clear why that was happening.

People looking for information about hip and knee replacement don’t think of it as joint replacement. We separated the content into two different pieces: “When to Get a Hip Replacement” and “When to Get a Knee Replacement.” Traffic improved 450 percent comparatively—I kid you not.

Just because we are used to regarding a content topic from a certain viewpoint—involving jargon or with an industry-focused approach—doesn’t mean our users do. This is a lesson we are, in theory, intensely familiar with, and yet we still occasionally create a piece that is biased toward our industry and not our user.

This was a good reminder to focus time and effort not just on identifying what may help our potential audience, but also on exploring their approach to the topic rather than our own.

Normalcy Bias

A few years ago, I wrote a piece about the symptoms of heart disease called “When to See a Cardiologist.” It was structured as a listicle, as that was (and, I believe, still remains) a popular format for consuming content.

Within a few months it became the most visited page within this subsite and even translated to clicks on the associated call to action “Schedule an Appointment.” This was undoubtedly a success, and one that we use frequently as an example for our clients of what specifically targeted content can do for our audience.

Imagine my surprise then to discover a few months ago that traffic to this piece had dropped by 25 percent! Not gonna’ lie—I experienced some panic. My team and I started to look into when and how this had occurred. We couldn’t necessarily pinpoint the exact cause, but did connect a few dots.

With the introduction of the Google answer box, which wasn’t necessarily that recent, this piece of content showed in search results in a different format to users. While best practice dictates that the heights of achievement are unlocked when your content shows in an answer box, the way our piece showed now could be considered detrimental to us, if our success measure was solely to drive traffic to our website.

Fig. 3: Google query search return from 2018 showing the content piece in an answer box with almost the entire list in an abbreviated format. Since the list is the main structure for the piece, a viewer might wonder if there is any truly pertinent information left on the page to justify a click.

I don’t know about you, but looking at that result (Fig. 3) as a user, I’m suddenly much more confident in making a decision about whether this content will help me find what I want to know. It’s also easy to assume that the 10-item list may not be detailed enough information for me.

In the future, I will be doing more research on whether a list format is the approach I want to take when creating a content piece of this type.

Content Bias in the Process of Content Creation

I hope my examples of content matching the specific cognitive biases of optimism, confirmation, and normalcy have given you some ideas of your own regarding bias. While the fake news travesty continues to make my own biased viewpoints resonate in frustration, I am making an effort to think about issues from other points of view. It sure ain’t easy!

Also, I plan on exploring more information regarding the types of cognitive bias. Understanding the biases of our audiences obviously is essential for us as content creators. It’s also important that we separate the inherent biases of our organizational need vs. our user need. I really can’t emphasize that enough. While we may think we are aware of that bias—believe me—we aren’t completely aware.

What biases can you find in your industry and, more specifically, your company? How do you approach them when you create content? It’s a question we are all going to have to answer more honestly if we want to be successful in connecting not just with our users, but with ourselves as well.

*Naturally, as a judgmental teenager and later college student, I was less aware of my own personal biases.

^While watching some continuing education videos for my project management professional certification, I listened to a fantastic presentation by Mario Alt titled “The Mission Critical Project Manager” that discussed cognitive bias.

**Wikipedia

5 Reasons Writing for Web Is Different Than Writing for Print

Cover of Writing for the Web Guide

Originally published Sept 19, 2014, on Pulse, University of Utah Health Care’s intranet. Used with permission.

Every medium requires slight adjustments in writing style, tone, punctuation, formatting, and the like. The web is no different. While the current goal of web content specialists is to create content that is device (or, it could be said, medium) agnostic, the overall style and tone of web writing are far more personable and relaxed than has been the case for print writing. Here are five reasons why web writing is different from writing for print:

1. It’s interactive.

When we visit any page on the web, we do so with the expectation that we can leave the page at any time via hyperlinks or search if we don’t find what we’re looking for. And there, in a nutshell, is the web: we are usually searching for something. While this can be the case with printed material, the web culture demands faster results—pretty much right now.

2. Readers scan paragraphs rather than reading them.

Most readers are either searching for specific content or browsing. As such they tend to scan paragraphs for the information that most appeals to them. Usability tests have overwhelmingly confirmed that this is how we read the web.* If that’s the case, we need to alter our writing techniques to match. We need to include subtitles, catchy first lines, and highlighted areas of importance via techniques like bold text, anchor text for a hyperlink (though this will take your reader away from your page), or bulleted lists.

3. Tone and style are more informal.

There are tons of different articles and pages, even books, on the web, but the writing tone and style that overwhelmingly define it are more informal. This is in part due to how we read articles, but it’s also a product of the intimacy of the web. Web pages have varying levels of credibility due to the democracy of the web: anyone can post almost anything, and many, many pages are personal sites and posts by individuals, which are not vetted through editors or any other sort of accrediting body. This naturally leads to a lighter, more informal style.

4. TL;DR: Shorter is sweeter (most of the time).

Too long; didn’t read. Literally. Readers are turned off by articles that take way too much time to describe something that could be done in a condensed manner. For example that last sentence could have been: “Readers are turned off by articles that aren’t succinct,” or “Readers like short articles.” The general rule of thumb is to cover the subject adequately, but not over the top. Some writing styles lend themselves to the verbose, but know your audience. As a general rule, shorter is sweeter.

5. It’s never finished!

People who spend vast amounts of time on the web innately understand this. News that is updated in real time is valued more for its timeliness than for its definitive nature. This doesn’t mean that content of a more evergreen nature (or always valuable) isn’t an essential part of any site, but rather that updates, corrections, or changes are just as important to the written piece because of the way we use the web.

* http://www.nngroup.com/articles/how-users-read-on-the-web/ Yes, this article is older, but it’s that evergreen sort of content, and from a highly, highly reputable source.