
Establishing Necessary Levels of Trust Before You Can Use AI for Data Analysis

Reporting Xpress Blogger

The Importance of Trust

One of the most powerful uses of generative AI in the nonprofit world is unlocking the massive amounts of constituent data that well-run organizations collect over time. That data can help you reach repeat and prospective donors with the most appropriate messaging, and spot the patterns and markers that identify potential “look-alikes” of your most consistent donors.

However, before taking advantage of any AI-powered software to pursue your goals, you need to establish a few levels of trust. 

The first level of trust involves your raw data. You need to understand who you are sharing your data with and what rights you are giving them.

Let’s break that down a bit more:   

It would not be hard for bad actors to create “honey pot” websites that advertise AI-powered analysis of your data for a low cost or for free. Plenty of “talk to your spreadsheet” sites of unknown origin are already out on the Internet. I am not saying any or all of them are bad, but they can post whatever privacy policy it takes to attract users and then simply ignore that policy and sell your data. It is a scary thought, and something anyone in your organization could conceivably fall for.

Navigating Privacy Risks with Major AI Platforms

But it is not just the fringe websites you should be concerned about. All of the major AI companies have been quietly changing their privacy policies to allow them to use your data in various ways. With many types of data this is no big deal, because what is shared is either not sensitive or not identifiable.

Donor data, which is exactly what nonprofits are likely to want to analyze with AI, is different. You don’t want donor demographics or their history of transactions and engagements with your organization becoming part of some AI model’s knowledge base and “leaking out” in chat responses somewhere down the road.

It is not all bad news. There are reputable AI-powered data analysis options that do not expose your data. One approach is to use a privately hosted model that does not learn from or share information with the “mothership” models that evolve over time. The Meta Llama models, for example, are openly available, so organizations can run them in a private data center and control the flow of information.
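
To make that concrete, here is a minimal sketch of what a privately hosted setup can look like. It assumes a local inference server such as Ollama exposing an OpenAI-compatible endpoint inside your own environment; the port, model name, and prompt are illustrative assumptions, not any specific vendor’s configuration.

```python
# Minimal sketch: talk to a privately hosted Llama model instead of a
# public cloud API. Assumes a local OpenAI-compatible server (e.g. Ollama)
# is already running; the port and model name below are assumptions.
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:11434/v1",  # private, local endpoint
    api_key="unused",                      # no external account or API key involved
)

response = client.chat.completions.create(
    model="llama3",  # hypothetical local model name
    messages=[
        {"role": "system", "content": "You are a data analysis assistant."},
        {"role": "user", "content": "In one sentence, what does 'donor retention rate' measure?"},
    ],
)
print(response.choices[0].message.content)
```

Because the model runs entirely inside your own environment, prompts and any data they contain never reach an outside provider.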

There are also options like Xpress Analytics from Reporting Xpress that use proprietary methods to describe data structures accurately enough for the foundational AI models to write queries without ever seeing any underlying data. Those queries are then executed against your data in a private environment, delivering results without exposing your data to those models.
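
The general pattern behind that kind of approach can be sketched in a few lines. The example below is not Reporting Xpress’s proprietary method, just an illustration of the idea: only a description of the schema is sent to the model, the model writes a query, and the query runs against a local database the model never sees. The local endpoint, model name, and table layout are all assumptions.

```python
# Sketch of schema-only query generation: the model sees table structure,
# never the rows. Assumes the same privately hosted, OpenAI-compatible
# endpoint as above; all names here are illustrative.
import sqlite3
from openai import OpenAI

client = OpenAI(base_url="http://localhost:11434/v1", api_key="unused")

# Local, private database -- the model never sees these rows.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE donations (donor_id INTEGER, amount REAL, gift_date TEXT)")
conn.executemany(
    "INSERT INTO donations VALUES (?, ?, ?)",
    [(1, 50.0, "2024-01-15"), (2, 250.0, "2024-02-03"), (1, 75.0, "2024-03-20")],
)

# Only this structural description is shared with the model.
schema = "Table donations(donor_id INTEGER, amount REAL, gift_date TEXT)"
question = "What is the total amount given by each donor?"

sql = client.chat.completions.create(
    model="llama3",  # hypothetical local model name
    messages=[{
        "role": "user",
        "content": f"Given this SQLite schema:\n{schema}\n"
                   f"Write one SQL query, with no commentary, that answers: {question}",
    }],
).choices[0].message.content.strip()

# In practice you would validate the generated SQL before running it.
print(sql)
for row in conn.execute(sql):
    print(row)
```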

Ensuring Transparency in AI-Driven Insights

Another important layer of trust that must exist between you and your AI-powered data analysis platform mirrors the trust you place in your human data analysts: you need to be confident in the answers they are giving you.

If a human data analyst hands you a new report or analysis they have just created, the first thing you're most likely going to want to do is ask them how they came up with their answer.  They would typically then explain the logic they used to filter, organize, and calculate their results, and then you could determine whether the two of you were on the same wavelength. 

AI-powered data analysis software like Xpress Analytics allows you to see the actual queries that were written and will even translate those queries back into plain, readable business logic so you can verify that you are getting the end product you asked for. The experience is much like dealing with a human analyst. You can find similar capabilities in the Snowflake environment.
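
As a rough illustration of that verification step, the sketch below asks the model to describe a generated query in business terms. It continues the same assumptions as the earlier examples (a privately hosted model behind an OpenAI-compatible endpoint), and the query text is illustrative.

```python
# Sketch of translating a generated query back into business language so a
# non-technical reviewer can confirm it matches what was asked for.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:11434/v1", api_key="unused")

sql = "SELECT donor_id, SUM(amount) AS total_given FROM donations GROUP BY donor_id"

explanation = client.chat.completions.create(
    model="llama3",  # hypothetical local model name
    messages=[{
        "role": "user",
        "content": "Explain in plain business language what this SQL query does, "
                   "including any filters and calculations:\n" + sql,
    }],
).choices[0].message.content
print(explanation)
```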

One of the niceties of dealing with an AI-powered data analyst versus a human is the speed at which answers come back to the requestor.  In most cases, it takes about 10-20 seconds for an AI data analyst to interpret your question, write code, and have that code executed against your data so that you can see the results.  No human can match that.   

In some cases, a less data-savvy person will still need a human to help with complicated or very deep questions where more knowledge of the underlying data structure is required. In those cases, the AI-powered data analyst becomes your human expert’s assistant, helping them complete the task far faster than they could on their own.

The ability of AI-powered data analysts to transform how nonprofits can analyze and use their data to elevate fundraising performance is obvious, but establishing these layers of trust is a prerequisite to safely and confidently incorporating this technology into your everyday workflows.   
