1: How AI Works

What is a foundation model?

When most people think of AI, they’re actually thinking of foundation models: large neural networks trained on massive datasets to understand and generate human language, images and other content. Think of a foundation model as a highly educated generalist who has read a very large slice of the internet.

Foundation models like OpenAI’s GPT series, Anthropic’s Claude family and Google’s Gemini are called “foundation” because they serve as a base that can be adapted for many different tasks. They’re pre-trained on general knowledge, which is expensive and time-consuming (compute costs alone can run into the hundreds of millions, even billions, of dollars), but once built, they can be used by anyone.

Training vs using a model

There’s a fundamental difference between training an AI model and using one.

Training is the process of teaching an AI model by exposing it to data. During training, the model learns patterns, relationships and knowledge. Training is generally expensive, especially for large models, because shaping the model’s knowledge in this way requires enormous computational power.

Using a model means sending it a prompt and receiving a response. When you ask ChatGPT a question, you’re using a pre-trained model. Your question doesn’t change what the model knows or how it works (unless the company explicitly uses your data for retraining, which we’ll discuss shortly).

This distinction matters enormously for privacy and data governance. If you’re using a foundation model through an API, your data isn’t automatically incorporated into the model’s knowledge. It’s processed to generate a response, then (depending on the provider’s policies) either deleted or stored separately from the training data.

Data training

Consumer AI tools, including some versions of ChatGPT, may use your conversations to train future models unless you opt out. That might be fine for casual use, but not for community engagement that may involve personal or sensitive information. Enterprise-grade tools like Civio and API services generally operate differently. Your data is processed to deliver the service and is not used to train the underlying model.

Large models vs small models

The AI landscape is evolving beyond massive “one-size-fits-all” models. While Large Language Models (LLMs) have broad general knowledge, Small Language Models (SLMs) are emerging as powerful alternatives for specific use cases.

For community engagement, the future may actually lie in smaller, specialised models fine-tuned on engagement-specific data. A small model trained specifically on community feedback analysis could outperform a general-purpose LLM on that task, while being faster, cheaper and more privacy-preserving because it can run locally (even on your phone or laptop).

2: Data sovereignty and cloud infrastructure

How foundation models reach Australia

Here’s a question that confuses many people: if OpenAI’s GPT models are trained in the US, how can Australian organisations use them while keeping data in Australia? The answer lies in understanding how cloud infrastructure works and how enterprise AI tools are delivered.

Two ways to access foundation models

Consumer access: When you use ChatGPT’s free website, your data travels to OpenAI’s servers (typically in the US). It’s processed there, and the response comes back. You have limited control over where your data goes or how long it’s stored.

Enterprise access via Australian cloud regions: Major cloud providers such as Amazon Web Services (AWS) and Microsoft Azure offer foundation models in their Australian data centres. This is the key to data sovereignty.

How AWS and Azure make this work

Amazon Bedrock and Azure AI Foundry are platforms that let you access foundation models (such as Claude or GPT) via Australian infrastructure.

Here’s how it works:

  • Your data stays in Australia: When you send a query, it’s processed in an Australian region, for example AWS Sydney or Azure Australia East
  • The model is accessed via a secure connection: the Australian data centre connects to the foundation model through a private, encrypted tunnel, not the public internet
  • Processing happens locally: Your data is never sent overseas. The model’s “intelligence” is accessed, but your actual data remains in the Australian jurisdiction
  • Responses return to Australia: The AI’s response is generated and delivered within the Australian data centre

[Flowchart: data flow within Australian jurisdiction, including community interaction and database storage.]

Why this matters for compliance

Data sovereignty means your data remains subject to Australian law, including the Privacy Act. If data is processed overseas, it may be subject to foreign laws, which could compel disclosure to foreign governments. By using foundation models within Australian regions, you get:

✅ Data stays in Australian jurisdiction

✅ Subject to the Australian Privacy Act

✅ No training on your data

✅ Clear data governance and retention policies

The architecture in practice

Let’s use an LLM hosted on Microsoft Azure as an example:
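
In code, the pattern looks something like the sketch below: the application calls a model deployment that lives inside an Australian Azure region, so the prompt and the response are processed there. This is a minimal illustration using the openai Python SDK’s Azure client; the endpoint, deployment name and API version are placeholders, and an engagement platform would normally wrap this behind its own interface.

```python
# Minimal sketch: calling a GPT model deployed in an Australian Azure region.
# The endpoint, deployment name and API version are illustrative placeholders.
import os
from openai import AzureOpenAI  # pip install openai

client = AzureOpenAI(
    azure_endpoint="https://my-council-ai.australiaeast.openai.azure.com",  # hypothetical resource in Australia East
    api_key=os.environ["AZURE_OPENAI_API_KEY"],
    api_version="2024-06-01",
)

response = client.chat.completions.create(
    model="gpt-4o-engagement",  # hypothetical deployment name chosen when the model was deployed
    messages=[
        {"role": "system", "content": "Summarise community feedback themes in plain English."},
        {"role": "user", "content": "Feedback: 'The new bus route skips our street entirely.'"},
    ],
)
print(response.choices[0].message.content)
```

Because the Azure resource itself sits in an Australian region, the request is processed within that jurisdiction; the equivalent pattern on AWS is an Amazon Bedrock call made against its Sydney region.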

This is very different from pasting feedback into ChatGPT’s website, where data travels to US servers and you have no control over retention or use.

3: The building blocks

Teaching machines to understand context

One of the most powerful concepts in modern AI is the embedding. An embedding is a way of representing text (or images or other data) as a list of numbers called a vector, which captures its meaning. Words or sentences with similar meanings have similar vectors.

For example, the sentences “I’m worried about traffic congestion” and “Increased vehicle flow is a concern” would have very similar embeddings, even though they use different words. This allows AI systems to understand that two pieces of feedback are talking about the same issue, even when community members express it differently.

Embedding models are specialised AI systems that create these numerical representations. They power semantic search, which searches based on meaning rather than exact keywords. In a traditional keyword search, you would only find feedback containing the exact word “traffic.”

With semantic search using embeddings, you can also find feedback about transport, congestion, or commuting, even if those specific words are not used.
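
To make that concrete, here is a small sketch of semantic similarity using an open-source embedding model. It assumes the sentence-transformers library, and the model name all-MiniLM-L6-v2 is just one freely available example; any embedding model works the same way in principle.

```python
# Minimal sketch: comparing the meaning of sentences with embeddings.
from sentence_transformers import SentenceTransformer, util  # pip install sentence-transformers

model = SentenceTransformer("all-MiniLM-L6-v2")  # example open-source embedding model

feedback = [
    "I'm worried about traffic congestion",
    "Increased vehicle flow is a concern",
    "The library needs longer opening hours",
]

# Each sentence becomes a vector (a list of numbers) that captures its meaning.
vectors = model.encode(feedback)

# Cosine similarity: values near 1.0 mean "very similar meaning".
print(util.cos_sim(vectors[0], vectors[1]))  # high: both sentences are about traffic
print(util.cos_sim(vectors[0], vectors[2]))  # much lower: a different topic
```

Semantic search over thousands of submissions is this comparison repeated at scale: embed the query, embed every submission, and return the closest matches (a vector database does exactly this efficiently).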

Embeddings are the building blocks that make Retrieval-Augmented Generation possible.

Retrieval-Augmented Generation (RAG)

Now that we understand embeddings, we can look at how they are used in a system called Retrieval-Augmented Generation, or RAG.

When you ask an LLM a question, it uses patterns it learned during training to predict the most likely answer. That training covers a huge amount of information, but it does not include your specific documents or the latest policy updates. As a result, the model can sometimes “hallucinate”, confidently generating information that sounds right but is completely false.

RAG helps reduce this problem by giving the model a way to look things up before answering. It combines two steps:

Retrieval: When you ask a question, the system first searches a specific, reliable knowledge base such as your engagement reports, project documentation or policy library. It uses semantic search powered by embeddings to find the most relevant pieces of information.

Generation: The AI then passes those retrieved documents to the language model, which writes its answer using that context. It is not pulling from general internet knowledge. It is building its response based on your data.

By separating fact-finding from language generation, RAG helps keep answers grounded in real information. It makes the model behave more like a well-informed assistant, looking up facts before speaking.
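
Put together, a bare-bones RAG pipeline is only a few steps: embed the knowledge base, embed the question, retrieve the closest documents and hand them to the language model as context. The sketch below continues the sentence-transformers example and stops at the grounded prompt, since the final generation step is simply whichever LLM call your platform or provider uses (for instance the Azure example earlier).

```python
# Minimal RAG sketch: retrieve the most relevant documents, then build a grounded prompt.
from sentence_transformers import SentenceTransformer, util  # pip install sentence-transformers

model = SentenceTransformer("all-MiniLM-L6-v2")

# A tiny stand-in for your approved knowledge base (reports, FAQs, policies).
knowledge_base = [
    "The draft transport plan adds two new bus routes in the northern suburbs.",
    "Community submissions on the transport plan close on 30 June.",
    "The council's tree-planting target is 5,000 new street trees by 2030.",
]
doc_vectors = model.encode(knowledge_base)

def build_grounded_prompt(question: str, top_k: int = 2) -> str:
    # 1. Retrieval: find the documents whose meaning is closest to the question.
    scores = util.cos_sim(model.encode(question), doc_vectors)[0]
    best = scores.argsort(descending=True)[:top_k]
    context = "\n".join(knowledge_base[int(i)] for i in best)

    # 2. Generation: this prompt (retrieved context + question) is what gets sent
    #    to the language model, which is instructed to answer from the context alone.
    return (
        "Answer the question using only the context below. "
        "If the answer is not in the context, say you don't know.\n\n"
        f"Context:\n{context}\n\nQuestion: {question}"
    )

print(build_grounded_prompt("When do submissions close?"))
```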

Controlled knowledge, trusted results.

RAG works best in controlled or siloed environments where you decide exactly what the AI can and cannot access. This means the system retrieves answers only from your approved data sources, not from the public internet.

For example, a council engagement team could build a RAG-powered tool that searches only its engagement data and policy documentation. The AI can then generate summaries or answer questions, knowing that every statement comes from verified internal information.

This setup also supports strong privacy and data governance. You control what goes into the knowledge base, how often it is updated, and who has access. That level of control makes RAG-based systems well-suited for public-sector use, where accuracy, transparency and trust are essential.

It is not magic, but it is a big improvement.

RAG does not eliminate hallucinations completely, but it makes them far less likely. Think of it as giving the AI a carefully curated library to check before answering. If that library is accurate and complete, the answers will usually be strong.

If they are consistently weak, look first at your knowledge base: remember, garbage in, garbage out.

In practice:

  • Good data = good answers: A well-managed, up-to-date knowledge base leads to reliable responses.
  • Bad or missing data = weak results: If the right information is not found, the model may still make educated guesses.
  • Overconfidence happens: Even when unsure, a model can sound certain, so transparency and validation are important.

RAG connects everything that you’ve read so far. Embeddings help the AI understand meaning, retrieval finds the right knowledge, and the language model turns that knowledge into natural, readable text. Together, they make generative AI more accurate and explainable.

Tokens and context windows

Tokens: how AI reads text
When you send text to an AI, it doesn’t read it like a person. It breaks the text into small chunks called tokens. A token is about three-quarters of a word. For example:

“Community engagement” = 2 tokens
“I’m worried about traffic congestion” = 6 tokens

This matters for two reasons:

Cost: Most AI services charge per token processed when accessed via an API. Analysing 5,000 survey responses (around 100 words each) might cost between $4 and $40, depending on the model used.

Context limits: Each model has a maximum number of tokens it can handle at once. This is called a context window, or the AI’s “working memory.”

Smaller models handle around 4,000 tokens (about 3,000 words), while the most advanced models handle 200,000 tokens or more (about 150,000 words).
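
You can check token counts yourself with OpenAI’s open-source tiktoken library, which tokenises text the way GPT-family models do. The sketch below counts tokens and works through a rough cost estimate; the per-million-token price is an illustrative placeholder, since real prices vary by model and change over time.

```python
# Minimal sketch: counting tokens and estimating the cost of a bulk analysis.
import tiktoken  # pip install tiktoken

enc = tiktoken.get_encoding("cl100k_base")  # tokeniser used by many GPT-family models

print(len(enc.encode("Community engagement")))                  # a couple of tokens
print(len(enc.encode("I'm worried about traffic congestion")))  # a handful of tokens

# Rough estimate: 5,000 responses of ~100 words each, at roughly 4/3 tokens per word.
total_tokens = 5_000 * 100 * 4 // 3   # roughly 667,000 input tokens
price_per_million = 6.00              # illustrative only; check your provider's current pricing
print(f"~${total_tokens / 1_000_000 * price_per_million:.2f} of input, before output tokens")
```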

Context windows: the AI’s memory limit

The context window includes everything: your prompt, any uploaded documents and the AI’s reply. Once you reach that limit, the AI starts to forget earlier parts of the conversation or simply runs out of memory, something you may have noticed if you have ever given an AI tool too much information at once.

4: AI in Community Engagement

Now that you’re an AI expert, let’s explore how it can be applied in community engagement practice.

These applications build on the concepts we’ve covered – foundation models, embeddings and RAG architecture.

When to use AI

Transforming feedback analysis

The most immediate and high-value application of AI in engagement is the analysis of qualitative data. Manually coding large numbers of survey responses is labour intensive and prone to inconsistency. AI can identify themes, categorise feedback, and even detect sentiment at scale.

What AI can do:

  • Identify and categorise themes across thousands of responses in minutes
  • Detect sentiment (positive, negative, neutral or more nuanced emotions)
  • Surface minority viewpoints that might be overlooked in manual analysis
  • Generate initial report drafts with key findings and illustrative quotes
  • Identify correlations between demographic groups and concerns

What AI cannot do:

  • Understand the full social and political context of an issue
  • Detect sarcasm, irony, or culturally specific references reliably
  • Make value judgments about which feedback is “more important”
  • Replace the need for human interpretation and strategic decision-making
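
To give a feel for the “can do” list in practice, the sketch below asks a language model to tag a single piece of feedback with a theme and a sentiment. It reuses the hypothetical Australian-region Azure deployment from the earlier example; in real analysis you would loop over every response and then have a human review a sample of the results.

```python
# Minimal sketch: tagging one piece of feedback with a theme and a sentiment.
import os
from openai import AzureOpenAI  # pip install openai

client = AzureOpenAI(  # same hypothetical Australian-region resource as the earlier sketch
    azure_endpoint="https://my-council-ai.australiaeast.openai.azure.com",
    api_key=os.environ["AZURE_OPENAI_API_KEY"],
    api_version="2024-06-01",
)

feedback = "The intersection near the school is dangerous at pick-up time."
prompt = (
    "You are helping analyse community feedback. Return a JSON object with two keys: "
    "'theme' (a short label) and 'sentiment' (positive, negative or neutral).\n\n"
    f"Feedback: {feedback}"
)

response = client.chat.completions.create(
    model="gpt-4o-engagement",  # hypothetical deployment name
    messages=[{"role": "user", "content": prompt}],
)
print(response.choices[0].message.content)  # e.g. {"theme": "road safety", "sentiment": "negative"}
```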

Breaking down language barriers

AI-powered real-time translation can make engagement genuinely multilingual. Information systems can respond in dozens of languages, surveys can be automatically translated, and feedback submitted in any language can be analysed together.

This can potentially be transformative for multicultural communities where language has historically been a barrier to participation.

Intelligent, always-available information access

AI information systems can provide 24/7 access to project information. Community members can ask questions in natural language and receive accurate, sourced answers drawn from project documents.

Efficiency gains for practitioners

AI can automate time-consuming administrative tasks: transcribing meetings and focus groups, generating first drafts of social media posts or newsletters, summarising long documents and creating action item lists from meeting notes.

When NOT to use AI

Not every engagement task is suitable for AI. Understanding the boundaries is as important as understanding the possibilities. This is where human skills remain irreplaceable.

Deliberative and consensus-building processes

AI cannot run genuine deliberation where participants need to hear each other’s stories, build empathy and work toward shared understanding. Deliberative democracy methods like citizens’ juries, consensus conferences and participatory budgeting rely on human facilitation, emotional intelligence and the ability to navigate conflict. AI can support these processes (e.g., by summarising background materials), but it cannot replace the facilitator’s role.

Culturally sensitive or trauma-informed engagement

When engaging with communities that have experienced trauma, discrimination, or historical injustice, human empathy and cultural competence are non-negotiable. AI lacks the contextual understanding and emotional intelligence required for these situations. Engagement with First Nations communities, for example, requires respect for cultural protocols, relationship-building with Elders and understanding of historical context that AI cannot provide.

High-stakes decisions affecting rights or livelihoods

When engagement outcomes directly affect people’s rights, access to services, or livelihoods, AI should be only a decision-support tool, never the decision-maker. Human accountability, along with the ability to exercise discretion and compassion, is essential.

Quick decision framework

✅ Good fit for AI:

  • Analysing 20+ survey responses or submissions
  • Categorising and coding large volumes of feedback
  • Transcribing meetings and focus groups
  • Translating content into multiple languages
  • Providing advice on survey structure
  • Providing 24/7 access to project information
  • Summarising long technical documents
  • Identifying patterns and correlations in data
  • Drafting initial reports or social media content

❌ Poor fit for AI:

  • Building relationships with community leaders
  • Running citizens’ juries or deliberative workshops
  • Engaging with First Nations communities
  • Navigating conflict or power dynamics
  • Determining who is a “legitimate” stakeholder
  • Trauma-informed or culturally sensitive engagement
  • Small-scale engagement with known stakeholders

See why agencies are moving to Civio

Discover the platform helping government agencies achieve stronger engagement, with tools that save time and simplify administration.


5: Understanding AI Bias

Knowing what AI can do is only half the story. To use AI responsibly, you must also understand its risks and limitations. This section covers the two biggest concerns we’ve heard in community engagement: bias and privacy.

The problem of AI bias

AI systems learn from data, and if that data reflects historical discrimination or under-represents certain groups, the AI will perpetuate and even amplify those biases. Understanding where bias enters is critical to ensuring fair and representative outcomes.

Data bias occurs when training data is unrepresentative. If an AI is trained primarily on feedback from affluent, English-speaking communities, it may struggle to accurately interpret input from culturally diverse or lower-income populations. It may misclassify their concerns or fail to recognise important themes.

Algorithmic bias arises from how the AI is designed. Even with balanced data, the algorithm’s structure (what features it prioritises and how it weighs different factors) can introduce bias.

Human bias enters through the people who build, deploy and interpret AI systems. Our own assumptions and blind spots shape everything from what data we collect to how we frame prompts to how we interpret results.

In community engagement, bias could manifest as AI tools that systematically misinterpret feedback from certain demographic groups, fail to recognise culturally specific concerns, or prioritise issues raised by dominant groups over marginalised voices.

Mitigating bias: a multi-layered approach

Addressing bias requires action at every stage:

Pre-processing: Ensure input data is representative and balanced. Actively seek out diverse voices and perspectives. If you’re analysing community feedback, check that your engagement process itself reached diverse groups.

Algorithmic fairness: Use AI tools designed with fairness constraints. Test for disparate outcomes across demographic groups. Ask vendors: “Have you tested this AI for bias? Can you share the methodology and results?”

Post-processing: Audit AI outputs for bias. Use human review to catch and correct biased results before they inform decisions. Compare AI findings against your own reading of a sample of feedback.

Transparency and contestability: Make it clear when AI has been used in analysis. Provide mechanisms for community members to challenge AI-assisted findings.

Human oversight: Never allow AI to make final decisions. AI should inform, not replace, human judgment. You remain accountable for the engagement outcomes.
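
One concrete form of the post-processing step above is a simple disparity check: compare the AI’s labels against a human-coded sample and see whether agreement differs across demographic groups. The sketch below uses pandas, and the column names and values are made up purely for illustration.

```python
# Minimal sketch: checking whether AI sentiment labels diverge from human coding
# at different rates for different demographic groups.
import pandas as pd  # pip install pandas

# Illustrative data: replace with an export of your own human-reviewed sample.
sample = pd.DataFrame({
    "group":       ["18-34", "18-34", "65+", "65+", "CALD", "CALD"],
    "ai_label":    ["negative", "neutral", "negative", "negative", "neutral", "positive"],
    "human_label": ["negative", "neutral", "negative", "neutral", "negative", "positive"],
})

sample["agrees"] = sample["ai_label"] == sample["human_label"]

# Agreement rate per group: large gaps between groups are a signal to investigate further.
print(sample.groupby("group")["agrees"].mean())
```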

6: Privacy and AI in engagement

When you use AI in engagement, privacy needs to be considered from the start. The Australian Privacy Act sets the rules for how organisations handle personal information. The Office of the Australian Information Commissioner (OAIC) has confirmed that these same rules apply to AI systems.

It helps to think about two moments:

  • When you are using an AI tool, such as summarising feedback or generating reports.
  • When someone is training AI on data that may include personal information.

That distinction matters because AI can both collect and create personal information, sometimes without you realising it.

APP 1 – Open and transparent management

Your privacy policy must clearly explain how you manage personal information, including through AI systems. From 10 December 2026, if your AI system makes decisions that could significantly affect someone’s rights or interests, you must disclose:

  • The kinds of personal information your AI uses
  • The kinds of decisions made by the AI
  • Whether the AI makes decisions on its own or assists human decision-makers

This applies even to seemingly simple systems, such as AI that prioritises which feedback gets reviewed first or flags submissions for further attention.

Example: If your engagement platform uses AI to categorise submissions for sentiment, you must disclose this in your privacy policy before the December 2026 deadline.

APP 3 – Collecting information

If your AI tool analyses community feedback and infers details about individuals, such as location, cultural background or sentiment, that counts as collecting personal information under the Privacy Act.

You can only collect what is reasonably necessary for your functions or activities, and it must be done fairly and lawfully. If the information is sensitive, such as health, ethnicity or political views, you will usually need explicit consent.

Importantly, information that AI generates or infers about a person, even if incorrect (sometimes called “hallucinations”), is still considered personal information if it relates to an identifiable individual.

Example: If you upload community submissions to an AI system to summarise themes, you are collecting information through AI.

APP 5 – Being transparent

People need to know when AI is part of your engagement process and how their information will be used. Your privacy notice or engagement page should clearly explain:

  • That AI systems are being used to support analysis or reporting
  • Why they are being used, such as grouping feedback or identifying common themes
  • What types of personal information the AI will process
  • Whether any third-party AI services or overseas systems are involved (see also APP 8)
  • How people can access, correct or complain about their information
  • That AI systems may generate inaccurate information, and how you address this

Example: If an online survey uses AI to detect topics or sentiment, include a short statement telling participants how that works, that responses will be de-identified before analysis, and that they should not include sensitive personal information in their submissions.

APP 6 – Using information appropriately

Information collected for one purpose cannot automatically be reused for another. If feedback was gathered for a specific project, you cannot later use it to train an AI model unless participants were clearly told about that possibility from the start or you obtain fresh consent.

This is particularly important because many AI training processes involve large-scale data collection that may not align with the original purpose.

Example: If your organisation collected submissions for a housing consultation, you cannot reuse that data to train a new language model or improve AI capabilities without new consent from participants, even if you de-identify it first.

APP 8 – Cross-border disclosure

If your AI system sends personal information overseas, such as to cloud-based AI services operated by international providers, you must take reasonable steps to ensure the overseas recipient handles the information in a way consistent with the Australian Privacy Principles.

Example: If you use a US-based AI analysis platform, you must ensure the provider has adequate privacy protections. Consider whether data stays in Australia, whether servers are located overseas, and what safeguards are in place.

APP 10 – Accuracy

You must take reasonable steps to ensure personal information is accurate, up to date, complete and relevant. This includes information generated or inferred by AI systems.

AI systems can produce outputs that appear credible but are inaccurate or outdated. You must have processes to verify AI-generated information before relying on it, particularly for decisions that affect individuals.

Example: If AI summarises feedback and attributes views to specific participants, verify the accuracy before including it in reports. If someone requests correction of AI-generated information about them, you must respond within 30 days.

APP 11 – Keeping information secure

Privacy is not only about what you collect, but also about how you protect it. You must take reasonable steps, including technical and organisational measures, to protect personal information from misuse, interference, loss, unauthorised access, modification or disclosure.

Reasonable steps include using secure storage, restricting access to authorised staff, encrypting data in transit and at rest, regularly testing systems, and having an incident response plan in place.

Example: If you use a third-party AI analysis platform, check where its servers are located, how data is encrypted in transit and at rest, how long it is retained, and what happens if there is a data breach. Ensure you have a contract that requires the provider to handle data securely.

Building trust through compliance

Privacy law does not stop innovation. It helps build trust by ensuring people know how their information is handled, that their data is safe, and that they maintain control over their personal information.

As AI becomes more common in engagement, transparent and responsible use of these tools strengthens public confidence in government consultation processes.

7: Australian AI Ethics & Standards

Beyond privacy law, Australia has established ethical principles and voluntary safety standards for AI use. Understanding these frameworks helps you evaluate AI tools and use them responsibly.

Australia’s AI Ethics Principles (2019)

The Australian Government’s 8 voluntary AI Ethics Principles provide a values-based framework:

  • Human, societal and environmental well-being: AI systems should benefit individuals, society and the environment.
  • Human-centred values: AI systems should respect human rights, diversity, and individual autonomy.
  • Fairness: AI systems should be inclusive and accessible, and should not involve or result in unfair discrimination against individuals, communities or groups.
  • Privacy protection and security: AI systems should respect and uphold privacy rights and data protection, and ensure the security of data.
  • Reliability and safety: AI systems should reliably operate in accordance with their intended purpose.
  • Transparency and explainability: There should be transparency and responsible disclosure so people can understand when they are being significantly impacted by AI and when an AI system is engaging with them.
  • Contestability: When an AI system significantly impacts a person, community, group or environment, there should be a timely process to allow people to challenge the use or outcomes of the AI system.
  • Accountability: People responsible for the different phases of the AI system lifecycle should be identifiable and accountable for the outcomes of the AI systems, and human oversight of AI systems should be enabled.

While voluntary, these principles represent best practice and are increasingly referenced in regulatory guidance. For community engagement, Fairness, Transparency, Contestability, and Accountability are particularly critical.

Australia’s Voluntary AI Safety Standard (2024)

In 2024, the Australian Government released the Voluntary AI Safety Standard, which builds on the 8 Ethics Principles and provides practical, actionable guidance. The Standard consists of 10 “guardrails”:

  • Accountability and governance: Establish governance structures, an overall owner for AI use, an AI strategy, and staff training
  • Risk management: Use ongoing risk management to identify and mitigate AI-specific risks
  • Data governance and security: Protect AI systems with appropriate data governance, privacy, and cybersecurity measures
  • Testing and monitoring: Test AI systems before deployment and monitor for behaviour changes
  • Human control and oversight: Enable meaningful human oversight and intervention mechanisms
  • User disclosure: Inform end-users when they are interacting with AI or when content is AI-generated
  • Contestability: Provide processes for people to challenge AI use or outcomes
  • Supply chain transparency: Be transparent with other organisations in the AI supply chain
  • Record keeping: Maintain records to show compliance, including an AI inventory
  • Stakeholder engagement: Engage stakeholders throughout the AI lifecycle, focusing on safety, diversity, inclusion, and fairness

Why this matters: While currently voluntary, the Government is consulting on making these guardrails mandatory for high-risk AI settings, including AI used in government decision-making or public consultation. Adopting these guardrails shows due diligence and positions you ahead of likely future regulation.

The Standard also specifically addresses Indigenous Data Sovereignty, requiring organisations that use data from or about First Nations communities to obtain free, prior, and informed consent and to respect the right of First Nations peoples to govern their own data.

Using retail AI tools safely

Many practitioners are tempted to use free tools like ChatGPT for work tasks. Here’s how to do so without compromising data security.

The golden rule: never input sensitive information

Do not paste community feedback, personal information, project-specific details, or confidential data into free, consumer AI tools. Remember: these tools may use your inputs for training, and even if they don’t, you have limited control over data retention and security.

Safe use cases for consumer AI:

  • Drafting generic content (social media templates, general email structures)
  • Brainstorming ideas or approaches (without project-specific details)
  • Learning and skill development (asking how to do something)
  • Summarising publicly available information

For actual work involving community data, use:

  • Enterprise AI platforms with clear data policies
  • Purpose-built engagement tools with embedded AI (like Civio Engage) that are designed for the sector and provide governance
  • API-based solutions where you control data retention and usage policies

8: Making AI Work in Engagement

Meaningful engagement means everyone can participate. AI has the potential to either break down barriers or reinforce them, depending on how it’s designed and implemented. This section shows you how to ensure your use of AI makes engagement more inclusive, not less.

AI for Enhanced Accessibility

When used thoughtfully, AI can expand engagement among people who might otherwise be excluded. Here’s how:

Real-time translation breaks down language barriers. AI-powered translation can instantly convert project information and engagement materials into multiple languages.

Text-to-speech and voice input support people with visual impairments or low literacy. Text-to-speech can read out project information, while voice input allows people to provide feedback without typing.

Plain language summaries make complex information digestible. AI can generate simplified summaries of technical documents, making them accessible to people without specialist knowledge. This is particularly useful for planning documents, environmental impact statements and other technical materials that are often central to engagement processes.

Governance and Accountability Frameworks

Using AI in government requires governance. That might sound bureaucratic, but it’s actually straightforward: governance just means having clear roles, responsibilities and processes to ensure AI is used responsibly. You don’t need a complex framework; you need a practical one.

A Simple Governance Model

For most engagement projects, governance can be as simple as this:

  • Project Lead: Overall accountability for AI use in the engagement project. Ensures AI use aligns with project goals and ethical principles.
  • Engagement Team: Day-to-day use of the AI tool. Reviews AI-generated outputs and ensures they’re accurate and fair.
  • Data Steward: Ensures the data used to inform the AI is accurate, representative and managed according to privacy principles.
  • IT/Digital Team: Provides technical support. Ensures the AI tool is secure and properly integrated with other systems if required.

This isn’t about creating new positions or committees. It’s about being clear on who’s responsible for what. In a small team, one person might wear multiple hats. That’s fine, as long as the responsibilities are clear.

Risk Management and Audit Trails

Before you implement an AI tool, do a simple risk assessment. What could go wrong? Could the AI produce biased outputs? Could there be a privacy breach? Could the AI generate inaccurate information?

For each risk, think about how you’ll mitigate it. This doesn’t need to be a 50-page document. A simple table will do:

  • Biased analysis of feedback: Review AI outputs for balance; compare with manual analysis of a sample.
  • Privacy breach: Use tools with Australian data sovereignty; ensure data is encrypted.
  • Inaccurate information: Use RAG architecture; have staff review AI-generated content before publication.

The other key element is maintaining an audit trail. Keep a record of how AI has been used to inform decisions. This could be as simple as saving the AI-generated summaries you provide to decision-makers, along with a note of how they were used. This creates transparency and accountability.
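
An audit trail can be as lightweight as appending one structured record each time AI output informs a decision. The sketch below writes JSON lines to a file; the field names are only suggestions.

```python
# Minimal sketch: an append-only audit log of AI use in an engagement project.
import json
from datetime import datetime, timezone

def log_ai_use(tool: str, purpose: str, reviewed_by: str, notes: str,
               path: str = "ai_audit_log.jsonl") -> None:
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "tool": tool,                # e.g. "feedback summariser"
        "purpose": purpose,          # what the output was used for
        "reviewed_by": reviewed_by,  # the human who checked it
        "notes": notes,
    }
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")

log_ai_use(
    tool="feedback summariser",
    purpose="Theme summary provided to the project control group",
    reviewed_by="Engagement lead",
    notes="Summary checked against a 10% manual sample before circulation.",
)
```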

The Australian Government’s National Framework for the Assurance of AI in Government sets out five cornerstones: governance, data governance, risk-based approach, standards and procurement. At a project level, you don’t need to implement all of this in detail. But these principles should guide your thinking.

Advanced and Emerging Issues

AI is evolving fast. While you don’t need to be across every development, there are a few emerging issues worth keeping on your radar as a practitioner.

Generative misinformation and deepfakes are a growing concern. The ability to create realistic but fake images, videos and audio (known as deepfakes) poses a real threat to public trust. Imagine a deepfake video of a council CEO making statements about a controversial project that they never actually made. It sounds far-fetched, but the technology is already here and getting better.

What can you do? Be aware of the risk. Have a plan for responding if misinformation emerges during an engagement process. Be ready to verify and debunk false information quickly and authoritatively. And be transparent in your own communications so that the community knows what official channels to trust.

Synthetic data is artificially generated data that mimics real-world data. It can be used to test AI models without using real community data, which is a powerful way to protect privacy. This is still an emerging area, but it’s worth watching. As synthetic data becomes more sophisticated, it could allow for better testing and development of AI tools while maintaining strong privacy protections.

You don’t need to be an expert in these areas, but being aware of them will help you navigate conversations with your community and make informed decisions as new AI capabilities emerge.

Workforce Readiness

The best AI tool in the world is useless if your team doesn’t know how to use it well. Building AI literacy within your engagement team isn’t about turning everyone into data scientists; it’s about developing the skills and confidence to use AI effectively and critically.

Core Skills for the AI-Ready Practitioner

Three skills will serve you best as AI becomes part of engagement work: critical thinking, prompt crafting and ethical scrutiny.

Critical Thinking

The most important skill is critical thinking: knowing when to question the machine. Ask yourself:

  • Does this summary genuinely represent what people said?
  • Is sentiment being distorted by a small but loud group?
  • Does the interpretation fit what I already know about this community?

Remember: AI is a tool, not an oracle. As the Oracle tells Neo, “What’s really going to bake your noodle later on is, would you still have broken it if I hadn’t said anything?” In other words, interpretation still belongs to you — not the machine.

Prompt Crafting

Good results depend on good prompts. The clearer and more specific your instructions, the better the output. Provide context: describe the project, the audience, the tone and what you need the AI to do.

Experiment with phrasing and review the differences. Over time, you’ll develop a feel for how to shape responses that are useful, accurate, and on-brand.
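
One way to structure a prompt is to spell out role, context, tone, task and format explicitly, as in the sketch below; the project details are placeholders to adapt to your own work.

```python
# Minimal sketch: a structured prompt template for engagement work.
project = "Draft Active Transport Strategy"
audience = "residents of the northern suburbs"
task = "Summarise the five most common themes, with one representative quote each."

prompt = f"""You are assisting a local government community engagement team.

Project: {project}
Audience: {audience}
Tone: plain English, neutral, no jargon.

Task: {task}
Format: a numbered list. Do not invent quotes; only use the feedback provided.

Feedback:
{{paste the de-identified feedback here}}
"""
print(prompt)
```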

Building Team Confidence

AI can generate both excitement and hesitation. To build confidence across your team:

  • Start small: Pilot AI on low-risk projects to build familiarity before full rollout.
  • Explain the why: Show how AI saves time on repetitive tasks, allowing staff to focus on relationships and strategy.
  • Train by doing: Use real examples and data, not theory. Let people experiment and learn by trial.
  • Share successes: Celebrate wins (faster analysis, new insights, improved communication) to build momentum.

Feedback Loop

  • Project debriefs: After each project, discuss what worked, what didn’t, and what to improve.
  • Community feedback: Ask people about their experience with any AI features they used.
  • Iterate: Refine prompts, training, or communication based on lessons learned.

Treat AI as a continuing practice, not a one-off initiative. The more you review, adapt, and improve, the more value AI will bring to your engagement work.

Plans designed around you

Explore our plans and request pricing. We’ll send you a clear proposal, no sales calls, just the details you need.


How Civio uses these principles

At Civio, we recognise that community engagement depends on maintaining public confidence. Our architecture reflects the principles outlined in this guide.

Australian data sovereignty

All data storage and AI processing occur exclusively within Australia on AWS and Azure infrastructure. Your community’s information never leaves Australian jurisdiction and remains subject to Australian privacy law.

We access foundation models through private, secure connections – not the public internet – ensuring data is never exposed to external parties or used to train public models. This addresses the “training vs using” distinction: we use foundation models, but your data is never used for training.

Privacy by design

Before any data reaches an AI model, our systems automatically detect and remove Personally Identifiable Information (PII). This “sanitisation” step protects community privacy at the architectural level, addressing APP 3 and APP 11 requirements.

We design our analysis tools to focus on the content and themes of feedback, not the identity of individuals. User interactions with our AI-powered information system are anonymised, and conversation data is never used to train or improve the underlying foundation models.
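
To illustrate the general idea of PII sanitisation (this is not Civio’s actual implementation, just a toy version of the concept), the sketch below redacts two common identifier patterns before text would be sent to a model. A production system would rely on a dedicated PII-detection service rather than hand-written patterns.

```python
# Illustrative only: the general idea of PII redaction, not Civio's implementation.
import re

PATTERNS = {
    "[EMAIL]": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "[PHONE]": re.compile(r"\b(?:\+?61|0)[ -]?\d(?:[ -]?\d){8}\b"),  # rough Australian phone pattern
}

def redact(text: str) -> str:
    # Replace each detected identifier with a neutral placeholder.
    for placeholder, pattern in PATTERNS.items():
        text = pattern.sub(placeholder, text)
    return text

print(redact("Contact me on 0412 345 678 or jane@example.com about the park upgrade."))
# -> "Contact me on [PHONE] or [EMAIL] about the park upgrade."
```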

Accuracy through RAG

Our information system and analysis features use Retrieval-Augmented Generation (RAG) architecture. When a question comes in, the system first searches your project’s approved knowledge base – documents, FAQs, reports you’ve uploaded – using semantic search powered by embeddings.

It then generates a response grounded in that specific, verified information. This dramatically reduces “hallucinations” and ensures responses are accurate and relevant to your project.

Human oversight and contestability

Our tools include clear mechanisms for human review of AI-generated outputs. When community members interact with our AI-powered system, it’s clearly identified as AI, and there are pathways to escalate to human staff. We build in feedback mechanisms (like thumbs up/down on responses) that allow both practitioners and community members to flag issues, providing real-time quality control and addressing the Contestability principle.

Rigorous testing

Before deploying any AI feature, we conduct extensive testing: functional testing to ensure accuracy, risk scenario testing to identify potential failures, and adversarial testing (including “jailbreak” attempts) to ensure the AI can’t be manipulated into inappropriate behaviour. We test with diverse datasets to identify potential bias.

Alignment with Australian standards

Our approach aligns with Australia’s 8 AI Ethics Principles and the 10 Guardrails from the Voluntary AI Safety Standard. We maintain governance structures, use ongoing risk management, ensure data governance and security, and maintain records for compliance. For First Nations engagement, we respect Indigenous Data Sovereignty requirements.

Conclusion: the path forward

AI is not a replacement for human community engagement. It cannot build trust, navigate complex power dynamics or make ethical judgments. What it can do, when used responsibly, is manage the data-heavy, repetitive work that takes time away from practitioners. Used well, it frees us to focus on the strategic and relational parts of engagement that truly matter.

The key is to approach AI with informed scepticism. You now hopefully understand how it works, where it adds value and where it does not, how bias can emerge and how to reduce it, and what Australian law and ethical frameworks require. We hope this knowledge helps you evaluate AI tools critically and apply them in ways that protect and strengthen community trust.

If we get it right, AI will help us listen more carefully, reach more people and understand more deeply. If we get it wrong, it risks eroding the trust that sits at the heart of our work.

The choice is ours — and Civio is here to help you make it wisely.

Thanks for reading, and have a great conference.

Glossary

AI (Artificial Intelligence): Computer systems that can perform tasks that typically require human intelligence, such as understanding language, recognising patterns, and making decisions.

Algorithm: A set of rules or instructions that a computer follows to solve a problem or complete a task.

Algorithmic bias: Systematic and repeatable errors in AI systems that create unfair outcomes, often disadvantaging certain groups.

API (Application Programming Interface): A way for different software systems to communicate with each other. Enterprise AI tools often provide API access for secure integration.

APP (Australian Privacy Principles): Thirteen principles set out in the Privacy Act 1988 that govern how Australian organisations must handle, use and manage personal information. They cover collection, use, disclosure, data quality, security and access to personal information.

Bias mitigation: Techniques used to reduce or eliminate bias in AI systems, including diverse training data, algorithmic fairness constraints, and human oversight.

Cloud infrastructure: Remote servers and computing resources accessed over the internet, rather than local servers. Major providers like AWS and Azure operate data centres in multiple countries.

Context window: The maximum amount of text (measured in tokens) that an AI model can process at once, including your input and its response. Think of it as the AI’s “working memory.”

Data sovereignty: The principle that data is subject to the laws and governance of the country where it is physically stored. For Australian organisations, this typically means keeping data within Australian data centres.

Deepfake: Realistic but fake media (images, videos, or audio) created using AI, often used to misrepresent what someone said or did.

Embedding: A numerical representation (vector) of text, images, or other data that captures its meaning, allowing AI to understand similarity and relationships.

Encryption: The process of converting data into a coded format to prevent unauthorised access, ensuring information remains secure during storage and transmission.

Enterprise AI: AI tools designed for business use with strong data governance, security, and compliance features. Typically includes contractual guarantees about data usage.

Fine-tuning: The process of taking a pre-trained foundation model and further training it on specific data to specialise it for a particular task or domain.

Foundation model: A large AI model trained on massive amounts of data that can be adapted for many different tasks (e.g., GPT-4, Claude, Gemini).

Generative AI: AI systems that can create new content (text, images, audio) based on patterns learned from training data.

Hallucination: When an AI confidently generates information that is false or not grounded in its training data or provided context.

Inference: The process of an AI model using what it already knows to generate an answer, prediction or summary. It is what happens when you ask an AI a question and it produces a response based on patterns it learned during training.

Jailbreak: An attempt to bypass an AI system’s safety constraints or guidelines to make it produce outputs it was designed to refuse or avoid.

Large Language Model (LLM): A type of foundation model specifically designed to understand and generate human language, typically with billions of parameters.

Machine learning: A subset of AI where systems learn patterns from data without being explicitly programmed for every scenario.

Model training: The process of teaching an AI system by exposing it to large amounts of data, allowing it to learn patterns and relationships. This is distinct from using a trained model.

Natural Language Processing (NLP): AI techniques that enable computers to understand, interpret, and generate human language.

Parameters: The internal variables in an AI model that are adjusted during training. More parameters generally mean more capacity to learn complex patterns.

PII (Personally Identifiable Information): Information that can be used to identify a specific individual, such as names, addresses, email addresses, or identification numbers.

Prompt: The input text you provide to an AI system to get a response. Effective “prompt engineering” is key to getting good results.

Prompt engineering: The practice of crafting effective prompts to get desired outputs from AI systems.

RAG (Retrieval-Augmented Generation): An AI architecture that combines searching a specific knowledge base (retrieval) with generating responses based on that information (generation), reducing hallucinations.

Semantic search: Search based on the meaning of words and context, not just keyword matching. Powered by embeddings.

Sentiment analysis: AI technique to identify and categorise emotions or opinions expressed in text (e.g., positive, negative, neutral).

Small Language Model (SLM): A more compact AI model with fewer parameters than LLMs, designed for specific tasks. Often faster, cheaper, and more privacy-preserving.

Synthetic data: Artificially generated data that mimics real-world data patterns without containing actual personal information, used for testing and development while protecting privacy.

Token: A unit of text that AI models process (roughly 3/4 of a word). Many AI services charge based on tokens processed.

Training data: The data used to teach an AI model. The quality, diversity, and representativeness of training data significantly impacts the model’s performance and potential biases.

Vector: A numerical representation (list of numbers) that captures the meaning or characteristics of data. Embeddings are a type of vector used to represent text.

Vector database: A specialised database that stores embeddings and enables fast semantic search.
