Office of the Bravo Fleet Judge Advocate General

Case JAG-007

Bravo Fleet

v.

Aoife McKenzie
#2231

Trial by Judge  •  Decided April 22, 2025

Presiding Judge: Max Barrick

Investigator:

Defense Counsel: Jack Conrad

Charges

The Defendant, Aoife McKenzie, is charged with violating the following Article(s) of Conduct and Reprimand as defined by Section 6 of the Bravo Fleet Judicial Code.

  • 2x Plagiarism (Plea: Not Guilty)

    Members must not submit any plagiarized work as their own. Work of others, including content generated by software or a website, may be used in a member’s work provided that the sections of the work that are not wholly original work of the member and the sources thereof are identified and disclosed.

  • 1x False Statements (Plea: Guilty)

    No member shall make false statements with the intent to change the outcome of a Bravo Fleet proceeding.

Statement of Facts

On March 6th, 2025, the following character biography authored by Aoife McKenzie was flagged by Bravo Fleet staff after it was submitted to the Bravo Fleet Academy for credit on an Academy course:

https://bravofleet.com/character/145231/

The post in question, when viewed through the source code viewer for the rich text editor on the Bravo Fleet website, contained a significant amount of suspect CSS formatting code that was not added by the Bravo Fleet website. Upon examination, this CSS formatting code was found to reference the “gpt-4o-mini” model, GPT-4o mini being one of the Large Language Models used by ChatGPT. As the Creative Integrity Policy in place at the time barred the use of any “AI text generation software” in any capacity for biographies submitted to the Bravo Fleet Academy, an immediate JAG investigation was initiated. The engineering office was looped in to mass-search the Bravo Fleet website for any other submissions containing this or similar CSS code, and that investigation yielded 18 other examples. Most of these were in character biographies, ship descriptions, squadron descriptions, and the like, and all but one of them carried no attribution to ChatGPT. When the CSS code in these entries was compared to the ChatGPT website itself, it was found to be almost certainly the CSS formatting code from the ChatGPT interface. An exhaustive search for an alternative source was conducted, checking whether other tools (Copilot, Grammarly, Google Docs, MS Word, etc.) could have included this same code. No other source was found to produce this formatting, not even ChatGPT content copied into an intermediary application and then into the Bravo Fleet website’s text editor. The formatting could only be duplicated by taking content output from ChatGPT and entering it directly into the Bravo Fleet website.
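As an illustration of the kind of mass search described above, the sketch below scans exported post HTML for tell-tale marker strings such as the model names embedded in the pasted formatting. The directory layout, file naming, and marker list are assumptions made for this example; they are not the actual tooling used by the engineering office.

    # Illustrative sketch only: the file layout, marker strings, and names below
    # are assumptions for this example, not the engineering office's real tooling.
    import pathlib
    import re

    # Marker substrings of the kind described above: model identifiers left in the
    # markup when rich text is pasted directly from the ChatGPT interface.
    MARKERS = ["gpt-4o-mini", "gpt-4o"]

    def scan_exports(export_dir: str) -> list[tuple[str, str]]:
        """Return (filename, marker) pairs for every exported post whose raw
        HTML contains one of the marker strings."""
        hits = []
        for path in pathlib.Path(export_dir).glob("*.html"):
            raw = path.read_text(encoding="utf-8", errors="ignore")
            for marker in MARKERS:
                if re.search(re.escape(marker), raw, flags=re.IGNORECASE):
                    hits.append((path.name, marker))
        return hits

    if __name__ == "__main__":
        for name, marker in scan_exports("exported_posts"):
            print(f"{name}: contains '{marker}'")

In practice such a search would run against the site’s database rather than exported files, but the principle is the same: match on distinctive strings that only appear in content pasted directly from the ChatGPT interface.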

Given the large number of cases where this code was found, it was decided by the BFCO and BFXO, under advisement from the JAG, that the best solution would be to tighten the restrictions on generative AI use and then offer an amnesty to all who admitted to AI usage in their content. This was in large part because, in almost all cases, the content in question was in entries that at the time allowed generative AI software to be used as long as its use was disclosed or noted on the entry; the offense in those cases was failure to disclose usage, not the usage itself. Additionally, even though the biography that triggered the investigation and a story entry created by the same user were submitted in areas that barred generative AI software entirely (a biography submitted for Academy credit and a story, respectively), they would also be covered by the amnesty. Aoife McKenzie, along with several other members, was officially approached by the Judge Advocate General regarding the posts in question that contained CSS formatting from the ChatGPT website, and was asked to explain or disclose any tools used in those posts. Aoife McKenzie answered that she had used Grammarly and Copilot for the posts in question. As the code in question had been proven not to originate from either of those sources, it was determined that this statement was false. Additionally, as the accused had not truthfully disclosed the usage of ChatGPT in these posts, it was determined that they were no longer eligible for the no-penalty amnesty.

At this point it was decided, in consultation with the Investigator and Defender, to charge the defendant with 2 counts of Plagiarism and 1 count of False Statements. There were other entries beyond the two examples charged that were likely from ChatGPT, but it was decided to charge only the examples that specifically cited a ChatGPT Large Language Model (GPT-4o and GPT-4o mini, respectively) within the entry itself. The defendant was notified of the charges being filed against them and of the possible punishment being sought by the Judge Advocate General. The defendant was provided sufficient counsel from the Bravo Fleet Defender and was made aware of their rights under the Judicial Code. Due to the nature of the charges, the possible penalties would not exceed a one-grade demotion, a Letter of Reprimand, one year of probation, or a combination of the above. Therefore, pursuant to Judicial Code Section 3 – Process of Adjudication, and because this was a simple determination of fact as to whether or not ChatGPT was used rather than a special case requiring a jury trial, a Trial by Judge was set.

Verdict

The Presiding Judge, Max Barrick, reached the following verdict with regard to the Defendant, Aoife McKenzie.

  • On the charge of Plagiarism, the Defendant is found GUILTY on 2 counts.
  • On the charge of False Statements, the Defendant is found GUILTY on 1 count.

Judge's Opinion

This case marks the second time the question of AI-generated content has come before the Judge Advocate General’s office. It is still an issue that raises significant questions within our organization, and it motivated further restrictions on the usage of AI generation software in our activities even before this case was prosecuted. It should be noted for the record that this case was brought and processed under the version of the Creative Integrity Policy that was in place at the time of the offenses, not the version of the policy that is currently in place.

If there was ever a JAG case that the JAG and their staff tried harder not to bring against a member, I have not heard of it. The usage of AI software (in this case, ChatGPT) was never in doubt. The defendant was offered the opportunity to admit their usage of ChatGPT in their work without penalty, and they declined to do so, instead offering up a lie as to what had been done. We can say without question that the defendant lied at that time because, after the evidence was disclosed to them, they changed their not guilty plea on the matter of false statements to guilty, saying in effect “they got me”. In doing so, they admitted that they lied about using ChatGPT in some capacity in their entry and, in effect, admitted that they used ChatGPT. The policy in place at the time barred the usage of “AI text generation software” entirely, not merely the use of that software to write one’s materials in full. And this is where the defendant seems to have chosen their hill to die on. Given that they had admitted to lying about using ChatGPT, the JAG office offered to drop the False Statements charge and reduce the penalty to 12 months of probation and removal from the Task Group Commanding Officer position (the latter being essentially required, as staffers who are convicted of JAG offenses should not be in a position to lead or mentor others). The defendant refused this offer and chose to proceed to the inevitable outcome.

The defendant maintained that ChatGPT was used only to make simple grammar and spelling revisions to their work, the same as what Grammarly Premium might offer. The issue, however, is that ChatGPT is not the same kind of system as Grammarly or other grammar and spell-checking software. When asked why ChatGPT should be considered “generative” even when it is being used for grammar and spell checking, ChatGPT itself says the following:

ChatGPT is fundamentally a generative AI tool—its core function is to process and generate human-like text based on input. Even when it is used for tasks like spelling and grammar correction, it engages in language generation by interpreting user input and rewriting text in a more accurate or polished form. This means that even minor corrections involve the model generating language that did not previously exist in exactly that form. Unlike traditional spell checkers or rule-based grammar tools, ChatGPT does not rely solely on predefined rules or dictionaries. It draws on a vast training dataset to make contextual judgments, rephrasing sentences, suggesting stylistic improvements, and occasionally altering tone or structure in subtle ways. These are generative actions, not mere corrections. Therefore, using ChatGPT—even for what may seem like surface-level corrections—constitutes the use of generative AI. It’s important to acknowledge this, especially in academic or professional settings where the definition and ethical use of AI tools must be transparent and clearly communicated.

When asked to make a case for why using ChatGPT in that way should not be considered “generative”, all it can really say is that “Using ChatGPT for spelling and grammar correction should not be categorized as the use of generative AI in any way that compromises academic integrity, originality, or authorship.” Note that it does not say this is not the use of generative AI; it only says it is not generative AI use that should be seen as compromising the original work. ChatGPT cannot say that using it for grammar checking is not generative, because ChatGPT is a generative AI model; its functions cannot be considered otherwise. As an experiment, we also asked ChatGPT to check a piece of writing by the JAG for spelling and grammar: it made 36 unique edits and rewrote sentences, a practice that is explicitly called out as banned within the Creative Integrity Policy. So, did the defendant use generative AI software even when checking only for spelling and grammar? The answer is yes, because ChatGPT is by definition generative AI software. And, even beyond a letter-of-the-law reading of the policy’s prohibition on AI generation software, ChatGPT makes changes to writing that go beyond simple grammar and spelling correction even when asked to do only that, because it simply cannot help itself. The accused submitted content that they did not author themselves and that came from generative AI software. They are guilty of the violations charged.

Max Barrick, Presiding Judge

Defendant's Sentence

Judicial penalties are governed by Section 7 of the Bravo Fleet Judicial Code.

At this point, it comes down to how the arguments in the case impact the sentence. The defendant passed up the chance to avoid any punitive action entirely when they lied about using ChatGPT in the first place. The defendant then passed up a chance at a reduced penalty after changing part of their plea partway through the process. Additionally, the defendant offered arguments in their defense that were themselves demonstrably false. The entire defense rested on the idea that using ChatGPT was no different than using other grammar-checking software. They stated that they went through every recommendation offered by the AI line by line and “accepted, rejected, or rewrote” based on those recommendations. ChatGPT does not offer suggested revisions for acceptance or rejection as systems like Grammarly do; it simply outputs the corrected work in its entirety. Nor could the defendant have been taking the ChatGPT output and putting it into another system that compares two versions of the same document (say, Draftable) to manually accept or reject edits because, as was established during the investigation, the only way for the specific CSS formatting code to appear on the Bravo Fleet website is to copy content directly from ChatGPT’s web interface and paste it into the Bravo Fleet text field. So the defendant factually violated the Creative Integrity Policy, which bars the use of AI generation software in any capacity in both story and Academy entries. The defendant lied about using ChatGPT at all initially, and then continued to lie about how it was used during the course of the trial.

The defendant continually rejected the several off-ramps from this outcome that were offered by the Judge Advocate General. They lied repeatedly about what they used to assist in writing their posts and how they used it. The defendant has offered no statement of remorse or apology, instead maintaining that they are somehow being unfairly railroaded and admitting no real fault; even pleading guilty to making False Statements means little when one continues to make the same false statements. Given the fact, essentially undisputed by both prosecution and defense, that the accused did make use of AI generation software, coupled with the accused’s continued false statements about their own conduct during the case, the JAG is forced to impose the maximum penalty as outlined to the accused at the time they were charged.

  • 1-grade demotion to the rank of Commander
  • Removal from the position of Task Group Commanding Officer
  • General Probation of 12 months
  • 1 Letter of Reprimand

Evidence

Managed by the Judge Advocate General's Office

This trial was conducted by the Judge Advocate General's Office. If you have questions about this trial, please contact an office staff member.