Using AI Responsibly: Key Takeaways from Lilies War



Finally, finally I’m getting around to posting about the AI roundtable discussion at the 2025 Lilies War. So here’s to what we talked about, what we questioned, and where we might go next. It was a really thoughtful conversation about how we’re using AI tools: what’s helpful, what’s questionable, and what lines shouldn’t be crossed.

The conversation circled around some core truths many of us agreed on. First, AI can be incredibly helpful on the front end of a project. Want to find sources, track down a hard-to-pronounce term, or translate something that isn’t in your usual wheelhouse? AI might just be your new favorite apprentice. It’s also great at untangling complicated texts and offering you a jumping-off point when you’re staring at a blank screen.

But—yes, there’s a but—generating content using AI? That got mixed reactions. Some folks were okay with it in moderation. Others raised eyebrows. The general feeling? Using AI to help you think is one thing. Using it to create your final project and then slap your name on it? That’s a bit murkier.

Another point that hit home is that AI can be incredibly useful on the back end, too—especially for editing, for accessibility, and for folks with executive dysfunction. If it means more people can fully participate and express their creativity, then yes, we’re all for that.

We also talked about how complicated this context really is. AI isn’t just one tool with one function. And our community isn’t one-size-fits-all either. We come from different backgrounds, with different skill sets and levels of tech comfort. That makes coming up with a blanket rule tricky.

Some folks said they didn’t think we needed to slap an “AI was used” disclaimer on every project. They felt it wasn’t necessary until the A&S Criteria officially say so or offer clear guidance. And that’s fair. We’re still figuring out what responsible use looks like.

But one point kept coming up again and again: honor matters. In the SCA, we already have a system built around honor. We trust people to do their own work, to cite their sources, and to play fair. As one person put it, if we already have a culture of integrity, that culture can include AI use. We just need to extend it.

So, if you’re wondering how to handle AI in your next A&S project, here’s the takeaway: Know the rules. Don’t take credit for what you didn’t do. It really might be that simple.

Here are the notes Ly. Tanneke took for me at Lilies War.

Spell check is AI, but what is AI + what is it not?
Not Star Trek, but also some really deep things that aren’t AI—yet we expect AI to be a person. Humans anthropomorphize everything + gut reaction – Aagh! AI weeds out little creators; ones that are predators can cause the problem.

Some data is garbage from Twitter when creating a large lang[uage] model – can be hard to cherry-pick what goes into it.
Notebook LM – can use ChatGPT to find sources that you vet yourself.
Notebook LM – only uses what you feed + explain any queries about text data – cites everything it’s saying.

Generative AI – making up new stuff
Social media posts AI generated from/g[eneral] companies
Large lang[uage] models – engine behind AI; companies feed it from everywhere.
ChatGPT tuned by OpenAI

Ethical concerns + issues – permissions?
Not a very good writer – generative AI
Non-generative AI – only uses info you give it ✘

Notion – user-friendly version of iNote →
Generative – about what it is doing


Using AI as jumping-off point to find info.
Use AI honestly, don’t use it for academic dishonesty, or unintentionally commit acad. dishonesty.

Does educational background create a barrier for artists? Maybe for KAPS champions but not for those pursuing a Laurel (and maybe not even then)

Judges – how to educate them to know when to suspect AI + check.
Knowledge to use an AI checker.

Baseline – an AI-generated research paper is going to be given a 0

Goblin tools
AI-powered Magic ToDo? Formalizer
Magic ToDo – can break things down into component parts. Useful for people w/ exec. dysfunction

ChatGPT too polite + gives you more than you ask for

AI helping find conflicts in a schedule, make a typed list, or do scheduling


Prompts:
Do not lie
Bold any changes


AI
Useful on front end – research, finding sources, translation

Seemed to have consensus on using AI as a tool
Acceptable but questionable for generating content

Back end – editing, making accessible for folks w/ disabilities, exec. dysfunction

Our context is complicated + AI is a complicated tool.

Doesn’t want to see an AI disclaimer until A&S Criteria addresses it specifically or provides guidance

In the field created a system to uphold honor
Need a similar system to uphold that honor

Know the Rules
Don’t take credit for stuff you didn’t do
