Before you use AI to write grants, make sure you know these 4 things.

AI writing is the wave of the future… right? Surely it can write grants… right?

Lately, it seems I can hardly open a newsletter, attend a training, or look at a conference schedule without seeing the word AI popping up everywhere.

Since AI writing has become more accessible and received a lot of very positive press, you may be wondering if AI can write grants for you.

Alex, the founder of Millionaire Grant Lady and Associates, has been working in the nonprofit and grant space for more than a decade. In her work, she has fulfilled many different fundraising roles and has served as a grant reviewer to determine who received grant funding. Her knowledge of every side of the grant-winning game positions her and her team to assess proposals for effectiveness and fundability. Read this article to see more Millionaire Grant Lady wins from last year.

With this expertise in mind, we have tested many AI platforms, attended training on AI, and assessed the writing AI has produced for us.

If you are considering using AI to write grants, make sure you know these things:

  1. How do AI writing platforms work?
  2. Can you train AI?
  3. What are AI hallucinations?
  4. Can you use AI to reconfigure existing answers to meet character/word limits?

1. How do AI writing platforms work? Will they work to write grants?

To understand whether AI can write grants for you, you need to start with a basic understanding of how AI writers work.

AI, or Artificial Intelligence, writing platforms were created by being fed large amounts of writing. Through some computer/algorithm/coding magic, the platform digests these words in order to study how sentences are put together. After “studying” this large amount of language, the AI writing platform then uses algorithms that allow it to scan words from a prompt (provided by the user) and predict which words should come next.

For example, when I typed “How are you?” into an AI writing platform, it responded with an appropriate string of words about how it was feeling: “I’m an open AI platform, and I do not have feelings. But I am functioning well.”
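To make this scan-and-predict idea concrete, here is a deliberately tiny sketch in Python of a next-word predictor built from nothing more than word-pair counts. The training text is invented for illustration, and this is not how any real AI platform is implemented (real systems are vastly larger), but it shows the core move: the program never understands the sentence; it only tracks which word tends to follow which.

```python
from collections import defaultdict, Counter

# Invented training text, for illustration only.
training_text = (
    "our food pantry serves families in need "
    "our after school program serves children in need "
    "our shelter serves families experiencing homelessness"
)

# Count which word tends to follow each word in the training text.
next_words = defaultdict(Counter)
words = training_text.split()
for current, following in zip(words, words[1:]):
    next_words[current][following] += 1

def predict(start, length=8):
    """Repeatedly pick the most likely next word. No understanding involved, just counting."""
    output = [start]
    for _ in range(length):
        options = next_words.get(output[-1])
        if not options:
            break
        output.append(options.most_common(1)[0][0])
    return " ".join(output)

print(predict("our"))
# Prints: "our food pantry serves families in need our food"
```

Real AI writing platforms replace these word-pair counts with enormous statistical models trained on far more text, but the output is still a prediction of likely words, not a statement the system knows to be true.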

There are three issues with this scan-and-predict function:

  1. AI has been trained on outdated information.
    The information AI platforms were trained on is often outdated. For example, one AI writing platform was trained on information created in 2012 or before. At the time I am writing this newsletter, that was 12 years ago. When I asked this AI platform to give me current data about the rise in people experiencing homelessness, the platform responded, “I don’t have access to real-time data or the ability to browse the internet.”

    Because the content the AI platform learned from is outdated, it cannot accurately discuss any current problems our communities are facing.

    This should be concerning for nonprofits because our work is tied directly to the real, current problems of our world.

  2. AI may have been trained with inaccurate or biased sources.
    When AI platforms are fed information for training, this information comes from a wide range of sources, many of them on the internet. While AI creators have some quality control measures for this information, we know from experience that much of what we read on the internet is inaccurate, biased, or outdated.

    AI has no ethical basis for evaluating the writing it scans or the writing it creates. It has no understanding of what is a fact, what is a misrepresentation, or what is a lie. It creates writing that is modeled after other patterns of writing, and those writings may contain bias that the AI platform perpetuates through its predictive writing function. If you cannot vet the sources it is pulling from, do you feel comfortable allowing it to write for you?
  3. AI only creates text based on predictions, not based on understanding or context.
    AI creates writing similar to how predictive text messages create writing, by scanning what has already been written and predicting what words should come next.

    Sometimes, predictive text is correct and helpful; other times, it creates an absurd string of words that has no meaning in context.

Try it. Open a text message, type a couple of words, and then select the middle word it predicts about 15 times. Read the message. Would you write that? Would you feel confident enough in that message to send it? Would you send that message to the recipient if thousands of dollars were on the line?

2. Can you train AI to make it better?

Many AI writing platforms open to a chat page where you can craft new requests. Within this page, you can add a lot of very specific information that you want AI to use to generate a response. AI will then use the information you have given it to create writing modeled on yours. This is sometimes considered training the AI.

So does it work? If you give AI a lot of relevant information about your organization, will AI actually create a good grant proposal? Based on my testing… not really. Why?

  1. No matter how much training you provide, AI can still make inaccurate predictions.
    If we think back to how AI writing platforms work, they scan the writing they are fed to determine what words should come next. The more writing they are fed, the more accurate their predictions should be… at least in theory.

    But consider the nature of predictions in other fields:

  • Is the weather report right? Sometimes.
  • Is predictive text messaging right? Sometimes.
  • Is AI writing right? Sometimes.
  • Also, when I was in school, we were all supposed to be living on Mars by the year 2020, yet here we are, firmly on Earth.

Each input of data into any of these systems is supposed to make the predictions better, but still, we see that predictions are right only sometimes.

When I tested my phone’s predictive features, I typed “How are” and then selected the next predicted word 15 times. The final text was asking a friend about how I could fix the computer at work so he doesn’t have to get a new one.

While that writing may look good on the surface, when we consider the context that only a human can know, this text is absurd. First, I can’t fix a computer, so I’m not sure how my phone decided from past texts that I was going to say that this time. Second, this particular person works in construction, with not a computer in sight. So even though the writing makes sense, the text isn’t real. It doesn’t work.

Ultimately, the trainable AI platforms that we have tested make similarly big errors in accuracy. The writing is fluent and makes sense on its own, but in context, it is nonsense.

  2. AI cannot comprehend and assess writing for meaning, only for patterns.
    AI is artificial. It seems intelligent because of how much it can do and how quickly it can do it. But no matter what AI produces, it does not really understand; it just gives you words based on predictive patterns that make it seem like it understands.

    Consider a calculator. If you input an equation, it will give you an answer. If you input the equation correctly, it will even be the right answer. But if you ask a calculator to tell you what that answer means, it can’t do that. It can neither understand the context for the calculation nor can it tell you the effect of the answer. It just gives you a number. AI writers just give you words.

    AI cannot distinguish what is or is not important; AI does not know what makes you unique. In a competitive grant application, your organization has to find a way to rise above other organizations applying for the same grant. You have to be able to distinguish what makes your organization unique.

Are you interested in writing that doesn’t just look good but also highlights your organization’s unique selling points to multiply your grant funding? We are here to help.

3. What are AI hallucinations?

AI does not write from context, with a conscience, or with consciousness. It is just predicting. These predictions can lead to what is called an AI hallucination.

An AI hallucination is when the AI writing platform writes about things that don’t exist or are absurd. Take, for example, my predictive text above to my friend about a work computer he doesn’t have. The words made sense, but the message is absurd because it doesn’t fit the context.

Another AI hallucination I came across recently was in a product description for a broom. The broom was described as having “cutting-edge bristle technology.” This is an AI hallucination because while many products sell better when they are labeled as being cutting-edge, this does not apply to a broom. A broom is just a broom, and this particular broom had nothing cutting edge about it.

At their best, AI hallucinations make us smile because the error is humorous, and these kinds of hallucinations are probably pretty harmless. However, I don’t think any nonprofit wants a funder laughing at its proposal because it said something absurd.

At their worst, though, these hallucinations can produce outputs that reflect the inherent biases of the training the AI received, perpetuating harm to the very communities many nonprofits are working hard to help.

4. Can you use AI to reconfigure existing answers to meet character or word limits?

If you have been writing grants for long, you know that most foundations are generally looking for information about your organization and its history, the need for your program, specific activities you will do within the program, and the metrics you will track to know you have met your program goals.

Where foundations differ, though, is in how long they want these responses to be. Each day, our team of experts takes content and transforms it into answers that are 1,000 characters long, then 3,000 characters, then 200 words, then 500 words, and so on.

Many AI platforms promise to be able to do this task for you, and frankly, we were really hopeful that AI would do it well. I mean, it seems like an easy enough task, right? If I give AI good content, it should be able to cut out a few words, right?

However, we have yet to find an AI writing platform that can actually shorten or lengthen an answer to the target length while still answering the specific question well.

Why does AI fail at this? Well, remember, AI is only predicting the next word based on its large-language model training.

  1. AI does not know what makes your organization unique.
    What a skilled grant writer needs to do is cut out fluff words to home in on the essential information while also highlighting the unique attributes of the organization. Check out this article for more tips on how to write an effective grant.

    AI cannot understand what information is essential because its predictive algorithms do not place more importance on one word over another. In my tests, I found AI was just as likely to cut out the important information as it was unimportant information.

  2. AI does not actually meet the character or word limit, even when you explicitly train it.
    In my testing, AI only sort of hit the character target. When I asked for a response that was 1,000 characters long, AI was just as likely to produce 400 characters as 2,500, and frankly, neither was helpful to me. (Counting characters is the one part a simple script handles reliably; see the short sketch after this list.)
  3. AI cannot understand the nuances of a question.
    Many foundations will ask the same basic question but with their own unique spin. In my testing, when I gave AI two or three prompts that asked similar questions in meaningfully different ways, AI produced the same answer for each. Further, the AI answers were very vague and lacked the specific details a skilled grant writer could have included. Without specific details that actually answer the foundation’s question, your proposal is likely to hit the trash can.
  4. AI did not save me time.
    In my testing, I tried to see if I could use the AI writing as the starting foundation for my own. Unfortunately, I found that it took me just as long to edit the writing that AI produced as it would have taken me to write it from scratch in the first place.

    Are you looking to AI to save you time? At Millionaire Grant Lady and Associates, our clients reduce the time they spend on grants by 80% and receive a highly competitive grant product. Contact us to learn more about how we can write grants for you.
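As a side note to the character-limit discussion above, the only part of this task a computer handles reliably is the counting itself. Here is a minimal Python sketch of the check a human still has to run on whatever an AI platform produces before pasting it into an application portal; the draft text and the 1,000-character limit are made up for illustration.

```python
def fits_limit(draft, max_chars=1000):
    """Report how much of the foundation's character limit a draft answer uses."""
    used = len(draft)
    print(f"{used} of {max_chars} characters used")
    return used <= max_chars

# Hypothetical AI-drafted answer, invented for illustration.
draft = "Our organization has served local families since 1998 by providing..."

if not fits_limit(draft, max_chars=1000):
    print("Over the limit. This answer still needs a human edit.")
```

Trimming or expanding the answer to fit, while keeping the details that actually answer the funder's question, is the part that still requires a person.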

Conclusion:

We have all experienced the internet chatbots that make us want to scream to talk to a human. While these bots are human-like, they often fail to actually answer the questions we are asking.

In my testing of various AI writing platforms, my experience was the same. The writing sounded human-like, but it did not really answer the application questions. Further, AI writers failed to highlight what made the organization unique or to explain how the organization was addressing a current problem. The writing fell flat. If this writing were submitted, it would ultimately hurt the fundability of the proposal. And my team of writers and I could do much better.

Will AI writers always be inadequate? Technology advances at a breathtaking pace, so who knows where AI will be in a year or a decade. But as of today, the AI writers we have tested cannot craft a compelling proposal that I would feel comfortable submitting to a foundation. To me, AI cannot write grants that win.

Are you turning to AI because you need help writing proposals?

We can help. At Millionaire Grant Lady and Associates, we work with organizations of every shape and size to get them more grant funding. Our services range from total grant management to customized templates that your grant writing team can use as a guide to create your own compelling grant proposals. Contact us today to see how we can partner with you to raise more money to fund your mission-driven work. Have us write grants that win for you.