By the end of the two days, participants will:
• Understand what AI is, how large language models work, and what they can and cannot do in
government work
• Know the major AI tools available (ChatGPT, Claude, Copilot, Gemini, Perplexity) and when to use
each
• Be able to use AI safely and responsibly, with awareness of EU AI Act obligations and data protection
• Have hands-on experience with document drafting, summarization, research, data analysis, image
and video generation, and AI assistants
• Recognize deepfakes and AI-generated media risks
• Have a personal action plan for using AI in their daily work
• Welcome and trainer introduction
• Round of introductions: participants share their role, institution, and what they want to take away
• Mapping the room: current AI experience level, tools already used, biggest challenges
• Setting expectations and seminar structure
• Article 4 AI literacy requirement and what it means in practice
• Risk categories: prohibited, high-risk, limited-risk, minimal-risk
• Obligations for public institutions
• Practical implications for daily work
Lecture content:
• What AI is, how generative AI and large language models work
• Comparison of major tools: ChatGPT, Claude, Copilot, Gemini, Perplexity
• Strengths, weaknesses, and best use cases for each tool
• Free vs paid versions and what changes
• How to choose the right tool for the task
Hands-on exercise: Prompting practice covering zero-shot, few-shot, chain-of-thought, and role prompting. Participants practice writing better prompts and compare outputs across tools.
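As a flavour of the exercise, the same request can be phrased using each technique (the wording below is illustrative only, not the seminar material itself):

```
Zero-shot:        "Summarize this policy memo in three bullet points."

Few-shot:         "Here are two summaries in our house style: [...]
                   Now summarize the attached memo in the same style."

Chain-of-thought: "Read the memo, list its main arguments first,
                   then assess their implications, and only then
                   write a three-bullet summary."

Role prompting:   "You are a senior policy advisor. Summarize this
                   memo for a minister who has 30 seconds to read it."
```

Participants run variants like these across the tools introduced earlier and compare how the outputs differ.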
Lecture content:
• Risks of using public AI tools in government work
• What data should never be entered into public AI tools
• AI hallucinations and bias – how to spot and prevent
• Transparency, accountability, and human-in-the-loop principles
• Building safe AI habits
Case study: Confidential government data uploaded to a public AI chatbot – what went wrong, what should have happened, how to prevent it. Group discussion on safe vs unsafe use cases.
Lecture content:
• Document drafting with AI: briefing notes, policy summaries, internal memos, emails
• Reviewing and editing existing documents
• Summarizing long policy documents and meeting transcripts
• Translation and language adaptation
• Quality control and verification
Demonstrations and hands-on exercise: The trainer demonstrates AI summarizing a long policy document and drafting a briefing note from rough input. Participants then practice on their own real (non-confidential) materials.
Lecture content:
• Web-enabled AI tools: ChatGPT search, Perplexity, Claude with search
• Deep research workflows for policy and market analysis
• Source verification and fact-checking AI outputs
• Comparing sources and identifying conflicts
• Building research briefs from multiple inputs
Hands-on exercise: Participants conduct a guided deep research task on a relevant policy topic, evaluate sources, and produce a structured research summary.
• Summary of Day 1 key takeaways
• Open Q&A on previous day’s content
• Participants share what they tried or thought about overnight
• Setup for Day 2 themes
Lecture content:
• Working with structured data using AI
• What data can and cannot be added to AI tools – privacy, GDPR, data protection in practice
• Preparing data for AI analysis: cleaning, anonymizing, structuring
• Generating insights from datasets
• Building simple dashboards and visualizations
• When AI helps and when it misleads
Hands-on exercise: Participants work with a sample government dataset (anonymized public data), prepare it for AI analysis, generate insights, and create a basic visualization. Includes data classification exercise: which data goes where.
Lecture content:
• AI tools for image generation: ChatGPT, Ideogram, Midjourney
• AI tools for video generation: Runway, Sora, Veo
• Avatars and voice tools for internal training and citizen communication: HeyGen, ElevenLabs
• Use cases for government communication and internal learning
• The deepfake reality – how AI-generated media can be misused
• How to recognize deepfakes – visual and audio cues, verification techniques
• What government officials need to know about AI-generated media risks
Hands-on exercise: Deepfake recognition exercise: participants review a mix of real and AI-generated images and videos, identify which is which, and discuss verification methods.
Lecture content:
• What custom AI assistants are and why they matter
• Use cases for government: policy assistant, briefing-note generator, FAQ responder, internal knowledge search
• How to design a useful AI assistant
• Note on tooling: building custom assistants requires paid licenses (ChatGPT Plus or Team, Claude Pro). Demonstration done on trainer’s account; participants follow along
Hands-on exercise: Each participant designs a custom AI assistant for a real task in their work. Participants who have paid licenses build it live; others receive templates to build it after the seminar.
Lecture content:
• Why every public institution needs an internal AI policy
• Core elements: tool approval, data classification, use cases allowed, who decides
• Data sensitivity classification: public, internal, confidential, classified – what goes where
• Light cybersecurity awareness for AI use: account security, sharing settings, incident reporting
• How to roll out AI rules without killing adoption
• Building an AI champion network inside the institution
Group activity: Participants draft the outline of an internal AI policy framework for their institution: principles, data rules, approved tools, governance structure.
• Each participant builds a personalized plan: which AI tools they will use, for which tasks, with which safety rules
• Pair discussion to refine plans
• Volunteers share their plans with the group
• Practical commitments for the first 30 days back at work
This course will take place by Lake Como, offering a tranquil and inspiring setting for an immersive learning experience.
In addition to comprehensive training materials, the seminar fee covers the following:
a) Catering Services
b) Experiential Learning Session: Guided Tour & Culinary Exploration
c) Participant Recognition
d) Digital Materials Package
Please note: Security settings in some government institutions may block the registration form. If the form does not work, please email us directly.
🎅 St. Nicholas Day Offer: Book 3, Pay for 2!
This holiday season, secure 3 spots for any 2025 seminar and get one completely FREE – our gift to you!
📅 Offer ends December 6, 2024.
Don’t miss this chance to save and grow with us!