Ah, I don’t think it would work. Very coding-intensive, and we’d need to design a whole new site and maintain the domain and all the other stuff…
Also, based on the poll results, it seems like nobody actually prefers AI stories, so no one would visit that site at all, which would defeat the purpose of us building a site for them… (And yeah, I think it would cause people to avoid self-reporting even more heavily, because they’d want their stories to stay on the main sites.)
I think that when I roll out the rule updates, I’ll try to highlight the tag filtering option, and that might help a bit.
I’ll add a small comment here, which is that the recent lag is partly from the AI deluge of the past few months. But we also get way more stories from new authors these days than we used to! The submissions queue is booming, lol.
I’ve been thinking for a while that I need to expand the approvers team (by at least another five or six people, if we’re able to get enough dedicated volunteers). I think expanding the team will help a lot with queue times, and make things easier for the current approvers too.
I’ve just been trying to get a bunch of site stuff finalized behind the scenes before I do that…
I’ll toss my two cents in and agree with the folks taking the middle-of-the-road stance. While I personally have a very anti-generative-AI stance on the whole, I simply have the AI-created tag blocked on the site and live my life happily. I don’t have AI-assisted blocked, in case something with that tag really piques my interest, but I’m generally more likely to avoid stories with that tag than others.
My understanding of the AI-assisted tag does mirror others’ in that using something like Grammarly to check spelling and grammar seems normal, but using generative AI models to actually create text is where I start thinking of it as “assisted”. Then if it’s majority-AI, I would consider it “created”. I have personal opinions on using AI to come up with ideas, but ultimately I don’t think that’s necessary to disclose via tags.
As a reader, I do really appreciate this being a topic of concern and discussion for the admin team!
I know you wrote this while really mad. And I don’t want to use my position to dunk on you aggressively just because you let your anger drive the car, so to speak… I apologize in advance if any of this feels harsh. I tried to limit that harshness to the best of my ability.
Look, I do understand this sentiment, but… I think an enormous boon of erotica is being able to not take yourself seriously. It’s weird to try to morally shame erotica writers for… not taking things seriously? And this is a site that welcomes first-time writers as well as the veterans. It is a sandbox in which all varieties of writers are welcome to play.
So like… This stance feels more like a fun launching point from which to begin an abrasive screed—rather than an insightful assessment of what this site is actually doing, or of how many of its best authors often don’t take themselves seriously either. It also comes off as oddly incurious and uninterested in what the site actually is in totality, even if that wasn’t your intent.
Are we making an excuse for laziness? Is that what we were doing, in this whole thread?
Or are we, maybe, choosing a pragmatic option, divorced from our own personal feelings on the subject? We are trying to manage a complex and entirely volunteer-run project (aka, the site) and we are looking to adapt it to the realities of how people from all sides are engaging with it in the present moment.
You’re very angry. I understand. You are also talking a lot about the noble art of writing, which… okay. But you are mistaking our careful and extremely resource-driven compromises towards functionality for, like… ideology or personal morals. I think your anger may have also misled you to misinterpret a lot of things actually said in this thread. Anger can make it easy to respond to a strawman instead of a more nuanced reality—I know this from experience. It can make it difficult to untangle that strawman from the reality, too. I hope you are able to do so, in time.
I hesitated on whether to respond to this part, but I feel like I do want to nitpick this. Why is it important for someone to have rigorous artistic standards in how they are getting off?
Like, I could (and do) accept an ethical perspective which objects to using certain material for masturbation fodder—such as AI. But that isn’t what you’ve stated here. Instead there’s a concern over artistry.
Again, it feels like this is divorced from the reality of the site, in a way that does it a real disservice too.
In my experience, both as a writer and a reader, what makes erotica effective is less the artistic gloss and polish, and more its ability to capture certain fantasies potently. Some writers have this ability in spades. Even when their spelling or grammar is rudimentary, their erotica sizzles and works. (Some of the most revisited stories on this site fit into this category.) Meanwhile a lot of writers with, say, nuanced characterization and brilliant plots and a rich prose style are, also, not so great at writing satisfying erotica. They are different talents.
I would say that, with erotica, the standard is “are you getting some readers off, and how effectively are you doing it?”
A more convincing argument against AI in this context would be, genuinely, that it is not very good at getting readers off. That’s also likely why so many readers dislike it.
If AI were actually very good at it, I think a lot fewer people would mind reading AI stories—ethics be damned. Unfortunately many people are ruthlessly pragmatic in this way when it comes to their porn.
Okay, this is the part where I struggled to be less biting. My apologies:
If you would be willing to join our volunteer team of approvers and dedicate your time to pursuing every edge case where a story might contain AI, and then correspond with the author to figure this out (and also thoroughly test numerous AI detectors to ensure you have a semi-reliable way of detecting AI, and thus don’t accidentally ban a human-written story due to being overly rigorous in your application)—which is what we would need someone to do for 4+ hours each day in order to actually enforce an AI ban in a realistic way—then please DM me and we can discuss it further.
I think it has been pretty evident, in all the posts made in this thread, that we’re discussing pragmatic, real solutions that can be implemented by the limited team of volunteers who keep this site running daily for the reader-base’s enjoyment.
And as also mentioned in this thread, I made the poll because I wanted to take the temperature of how the reader-base feels about AI stories, and to check whether some of my proposed new rules for AI would feel too restrictive. I did expect the results of this poll to look the way they do.
But I was not going to make a sweeping policy off of a hypothesis that might be totally wrong. It felt important to make the poll and verify my hunch before proceeding. (And again, this is something I’ve already stated in this thread.)
I get that many people have strong opinions on this. And we encourage suggestions for solutions—but not if the solutions have the vibe of “just build a house on the moon!” in terms of realism.
If we ban AI, people will keep submitting AI. They just won’t tag it as such. Do you volunteer to be the solution to this problem, and dedicate the rest of your days to obsessively monitoring all submissions for the site to ensure your proposed rule is followed?
From a lot of trial and error, I’ve learned that the way to enforce a rule in a situation like this one is to give some outlet to the steam—not to try stoppering all the holes entirely.
The AI tags are a way to give an outlet to the steam. And it’s a solution which should also give readers an easy way to opt out of reading those stories, if they’d prefer to.
It is, in intent, a pragmatic compromise to a complicated situation.
Again, this is bringing in an oddly puritanical notion of self-improvement and moralizing to, like… a context where none of this makes coherent sense.
I know you were really angry when you wrote this. And my feeling is that you perhaps carried over some feelings you’d had from other conversations, in other places, to this specific conversation. I’m glad you were able to share your perspective. But the way you’d phrased your perspective was also incredibly condescending to the site, its authors and its admin team—which includes many people who have no warm feelings towards AI whatsoever.
But we are interested in keeping this site running and operational, regardless of our feelings on AI. That does not seem to be something you have any actual interest in, based on your post.
You asked for a temperature check on the subject, you got it. The poll was on whether people want to see it, not whether readers and writers on the site have ‘realistic solutions’ for an admittedly difficult issue for you to implement. So please don’t reiterate how mad you know I must be as if that changes the point I’m making. It’s a cliche to fire back with ‘I’m not mad on the internet!!1!’, but in this case I genuinely am not, lol. Believe me if you like.
But you’re right, if you outright banned AI people would just lie and post it anyway. Because at its heart generative AI is fraudulent. When I talk about being ‘serious’ and having standards, I’m not talking about grammar or spelling or even coherence, I’m just talking about putting your own thoughts out there and letting people read it. That’s what makes this site fun. Please don’t strawman me by claiming I’m demanding unrealistic, professional-level artistic standards from amateurs and first-time writers.
Listen to the words people use when discussing AI writing. Insulting. Plagiarism. Sludge. Poison. People aren’t talking like this because they’re ‘mad’, it’s because there’s something wrong about it.
If you think that openly permitting generative AI and carving out a niche for it will work out for you, fine. It sure hasn’t worked for any other site that’s gone whole-hog on allowing it (DA, Pixiv, Twitter), at least from my viewpoint. Those sites are basically unusable now, and that goes back to their permissive upload policies. The best way to ensure that people using AI go elsewhere is just to say that they’re not welcome. If you don’t want to go that far, I don’t blame you, but you’ll be the ones sorting through more and more crud as time goes on.
I’m just expressing my opinion, and I think it’s one shared by a lot of people who regularly visit this site. I sympathize with you wanting to keep the site running smoothly, I hope it keeps going for a long time. That’s all I’m gonna say, even if it sounds like you’ve already made up your mind.
I kinda don’t get the point of posting AI-generated writing publicly because, like… if I want to read an AI-generated story, then I’ll just generate one myself; that’s the whole point of AI. I’ve seen some people argue that it takes a lot of work to create a good prompt, but acting like that is equivalent to the process of writing is ridiculous, and at that point you might as well just post the prompt instead of the “story.”
It’s unfair that actual authors putting in time and effort have to share this space with pump and dump AI crap BUT from a logistical standpoint, there’s no reliable way to enforce an anti-AI rule for stories on this site so I think the AI-generated/assisted tags are a necessary evil to at least try and contain them. I think the pro-AI crowd are correct that we need to accept that AI is here to stay, but I think these same people also need to be respectful of the fact that they are intruding on OUR space and not everyone wants to read their slop.
My pie-in-the-sky solution would be to have a separate site for AI-generated works: works posted there can be cross-posted to the other sites as normal, but readers on the other sites have the option to filter out stories posted on the AI site. Submissions to the AI site are limited (to avoid flooding), and any author that submits an AI-generated story automatically has any future stories tagged as AI unless they specify during their submission that it is original text. I’d also suggest that there be some sort of way to flag a story for review as potentially AI-generated, but (as I can personally attest to) this would obviously be messy.
That’s a fair point; however, there is also just a constant flow of stories. There are days when the approver queue is over 20 stories, so any story published before that just gets buried.
I sometimes see a story and wish it could be bumped back up to the top for a second chance, but that wouldn’t really be fair either.
Yes, lol, I am aware of the wording of the poll that I made; I worded it that way for a specific reason. I’m not responding to you voting in the poll option, which I made for you to vote in. I’m responding to your post, and to what you actually said.
My observations that you seemed like you were writing from a place of displaced anger were an effort to grant you some empathy and grace while accounting for your condescending tone and your accusations. I’ll take you at your word, then, and retract that effort.
[And then I went and seriously edited this next bit, as it went into unnecessary detail. You hit on a pet peeve of mine in your reply, and in retrospect I don’t know that it accomplishes much for me to take you to task for it excessively.]
I take no issue with your conclusions or opinions about AI. I do take issue with some of the logic you used to get to that conclusion, in particular basing a moral stance around the language people use to describe their feelings on AI. There are many sound arguments against AI. The fact that some people have a disgust reaction—that they consider it sludge, poison, et cetera—really isn’t one. I think it is in our interest, as gay men, to avoid centering our moral decisions in that sort of thinking. It’s the exact same logic that people use for validating their knee-jerk disgust responses to us, and for their homophobia. Let’s not argue from that place.
And when it comes to making rules on the site, we are definitely not going to be arguing from that place. As I said, there are far, far better arguments against AI. This is really not one of them.
I feel the need to ask: have we said we’re going to go whole-hog on allowing it? My intentions as expressed in this thread have been pretty transparent, I think, in that I confirmed we’re testing the waters for being far more restrictive about AI content, and the poll was to ensure this aligns with the general mood of the reader-base.
I believe I have been clear throughout this thread. Your responses gave the impression that you did not really read or register the majority of the discussion. When I say you are responding to a strawman, that is what I refer to—that you are responding aggressively to things we did not say, and sometimes to things that are the opposite of what we said.
I appreciate the sympathy and I’ll reiterate we are a volunteer effort, and if you do feel strongly about this and feel you have the time, I would encourage you to reach out to me via DM about maybe volunteering too. I do mean it! Yes, right now we are not seeing eye to eye, but you seem passionate about the site. I am always glad to be able to translate passion into action.
I’ll add that I am very interested in people’s opinions, but I am also interested in improving things for our volunteers, which has made me less patient with strident demands that seem uninterested in how any of those demands would be achieved. Whether or not we’d want to institute a full ban on AI stories is irrelevant if we don’t have the manpower or time to realistically enforce it.
I am aiming for a solution that threads the needle and keeps the site usable in the long run. We may disagree on what is workable. I would hope, though, that we agree our goals are the same.
(And I’ll add, if anyone else would be interested in volunteering just in general, DM me! I think a lot of us are passionate about keeping this site running. So I am always happy to expand the volunteer team, and we’re long overdue in doing so!)
I don’t read AI stories, not out of some moral sense but because they tend to be quite bland.
Are there AI stories published here that are better than human ones? To me, absolutely. But to echo some of the sentiment from earlier in the thread, writing is a task you put effort into; it’s a result of writing and rewriting, learning and growing. If you want to have a computer do it for you, what’s the point?
Why share something you didn’t make? I can’t imagine that being very satisfying.
In my experience, AI stories don’t do as well; they get lower scores and less engagement.
I’d rather read a kinkling’s first erotica than generic AI stuff. Sure, it might not be good at first, but imagine where you’ll be in a year? Two? You could be the next big writer for the site, or even beyond it. No one starts out good. (In terms of writing high-scoring erotica, some of us stay bad at it.)
I think it’s worth noting that the main points which the site leaders have been discussing so far have been more detailed and strictly enforced rules related to AI stories (and modernizing said rules since AI has changed since they were originally formulated), not a more permissive environment for them.
Ideas raised so far have included using AI detection software to supplement the approvers’ instincts about untagged submissions, no longer permitting them as contest entries, and limiting the number of AI stories an author is allowed to post in a given timeframe. While none of these rules would ban AI stories outright, it’s demonstrably true that people will post AI stories and lie about it whether or not they’re banned. It’s maybe possible that an outright ban would reduce the influx of AI stories to the site, but it’s also possible that giving authors the option of honestly tagging AI stories would reduce the load on the approver team more effectively (and I’m inclined to believe that the people working behind the scenes on the site have a better grip on what would be most effective).
GSS is, as you said, a community for writers, but it’s also a community run by a small team of volunteers. I agree with Soren tbh, who said that his main concern is “lessening the burden on our approval team.” I suspect some people on this thread would prefer if no AI writing existed on the site at all—I know I would—but making that happen practically is a different story, and focusing on the latter isn’t really AI apologia. Making more detailed/strict rules and giving the mod team more clearly defined ways to respond to untagged AI usage (and AI spam) is also definitely a movement towards “less AI on the site,” not more. Framing that discussion as “the leaders of [GSS] making excuses for laziness” comes across as pretty combative, and at odds with the actual content of the discussion.
I agree with this. In my experience, AI generated stories are a fun toy for personal use but don’t really have staying power without significant human intervention. They tend to lose track of a narrative and can’t describe the in-between bits–my favourite part of a story–with tolerable quality.
However, I think a blanket ban with software enforcement is more likely to alienate community members and encourage people to attempt to pass off AI written stories as human generated, rather than eliminate AI created content on the site. In my opinion, allowing tagged AI stories is healthier in the long term, as Archive of Our Own is doing. As Corin pointed out, users can easily filter the tag out of their feed, and it would also give the moderation team the “this is AI gen, goodbye” lever to pull on mistagged stories.
I deeply dislike the idea of running suspected AI gen stories through an external cloud-enabled program, for the same reason I resist using LLMs or generators for any serious task: I can’t see its brain, and I don’t know what it’s doing with my data. Perhaps instead a second reader system where a second member of the moderation team has to review a potential AI story before bumping it from the queue might be an effective substitute?
Yeah, Derek’s point from yesterday made me wonder whether I should rethink that process… I wasn’t thinking about the AI detectors training on the text fed into them. I’m not really very tech-savvy, unfortunately.
I do wonder if the new process might instead be something like “hey, we’re pretty sure this is AI and we’re going to tag it as such—if you feel we’re doing this in error, let us know and we can run it through the AI detector to confirm whether it’s human-generated.”
I’m aware that this, unfortunately, could lead to a weird theoretical situation of putting an author into the bind of “okay I know I did not use AI to write this, but now you want to feed my story into AI to prove this??”
So far our track record on identifying unedited AI stories has been really solid, and (as far as I know) we haven’t missed one yet. But there’s always a first time for everything… So I’m not entirely sure which approach to go with. I’ll have to think about it.
A workaround is that most word processors have edit tracking associated with them. This is imperfect if the person wants to stay anonymous, but you could ask them to provide a file with the edit history if they’re suspected of using AI.
Right, and a third to a half of that flow is AI content that requires little to no effort to create and upload. I hate having to compete for readers with someone who types a few sentences of a plot summary. Sure, you’d still have some bad actors uploading AI stuff, but it would deter quite a big chunk of them.
An area that’s been giving me some indigestion with the use of AI (hereafter, large language models or LLMs, since those are the algorithms under discussion here) for gay content, specifically, is that a lot of LGBT content already has to conform to the moral and aesthetic tastes of straight people. LLMs have that built in on two sides.
Even though OpenAI is headed up by a gay billionaire, its products are strictly built for a business-oriented, heterosexually-focused arena, and most of its competitors are controlled entirely by straight people or built for the “mainstream” straight society. I am reluctant to hand another means of defining our culture (the production of art, specifically literature) over to straight people, as they already have too much power in defining what it means to be gay, and have proven repeatedly to be irresponsible (sometimes to the point of genocide) in using it. I think it’s very reasonable, given the retreat of liberalism in most of the developed world, to expect some LLMs to not only censor our content, but perhaps even report our intent to create it as a threat to the moral order of straight people that want to hurt us.
Even when the censorship systems are removed, the majority of the “LGBT” content they have scraped is not necessarily “our” content, by volume. Most M/M romantic and sexual written content is actually produced by non-men, for a female audience, and this is even more so for commercially successful M/M content. That’s going to be the majority of the content scraped to feed the intakes of LLMs, if it isn’t excluded by default as being unacceptable inputs for the model design. In some ways, explicit, noncommercial erotic content is one of the only areas of art that gay people produce for gay people and where the aesthetic preferences of straight people are excluded, rather than pandered to. We self-censor already in a lot of the content we produce, simply because it’s baked into our culture to be in a defensive crouch against the censure of straight people. I don’t want to see that reinforced by LLMs deciding the outer limits of “gay” content is where two femboys exchange longing glances so a woman can self-insert with them. Particularly if that leads people (who might not have figured out they’re gay) to think that’s all it means to be gay.
Hypothetically one could create an LLM that is designed around this problem, maybe. But I have no faith in any of the major actors to undertake that project, and the culture of the modern tech sector generally is very, very bad at respecting cultural boundaries and practicing the kind of self-reflection that is necessary to do that work well. It is more likely that any work done in this area, even if done in good faith, will become tools that straight people use to define “acceptable gay content” versus “deviant gay content,” so they can censor, harm, or demonize the latter.
LLMs won’t, and perhaps can’t, stop us from writing and creating and producing culture. There will always be people who produce it, just for them, just because they can. But LLMs can overproduce to the extent that our culture gets diluted in their products, making it harder to grow and thrive in an environment that’s already too defined and policed by straight people. They can discourage people from producing art, since it will have to compete in a brutal attention economy with high-volume LLM content, and that’s just a very discouraging place to begin for new entrants.
I support where Nu is on this for GKS. I think we have a thoughtful approach and that self-disclosure is the only responsible way to balance the situation GKS is in. We didn’t create these LLMs, but people are using them, will continue to use them as long as billionaires shovel titanic amounts of money into subsidizing them, and as long as they are deemed morally and legally permissible to use. We can’t control those inputs, only our responses to them. Requiring users to identify AI content when submitting it seems to be the only transparent, workable way forward that I’ve seen proposed, particularly on a site where volunteers like me have to review the content and adjudicate whether it meets our rules.
To be honest, I think this is a moot point given that the text is scrape-able. I think Corin made a good argument that it’s going to be scraped if it’s posted publicly. It’s not like we’re providing them with new information.
I trust the human instinct much more than I trust an AI detector. What you do with that gut feeling… I don’t know. Tag it and inform the author, so they can untag it if they didn’t use AI? Email them and ask? Reject it from the queue for being unlabelled AI, and be open to being wrong if they resubmit it?
We do have the privilege of having relatively low stakes here, in terms of the consequences of a false positive (which seems to be the MIT concern - the detectors incorrectly flagging human content as artificial). We’re not running a university or job fair, and like you’ve said the intuitive human pattern matcher Mark 1 is our first and sometimes only screen when doing our work. The detectors are probably overkill when a human determines a given piece of content does not pass the smell test, but it gives us an external tool we can use to validate suspicions.
At present, the advice we’ve given each other is that if an approver thinks a piece is untagged LLM content, we can reach out to the author about it. If it otherwise does not meet publication standards (e.g. ChatGPT prompts in the body of the text…) we can just reject it and add it to the list of things that need to be cured before we publish it. I don’t want to be on the receiving end of a false-positive check myself, but I’d certainly understand the need for such a check and be bemused/confused if someone thought my smut was generated by an LLM.
I’ve actually had people comment on my political posts on Reddit saying that I was probably just a ChatGPT replyguy, because their “mark 1 pattern matcher” assumed the particular “policy grad school memo” style I use when writing in that arena was… well, stilted and artificial and full of made-up words. They just haven’t read as many policy memos or parsed as much opaque regulatory jargon as I have, and didn’t notice the tone shifts between formal and casual that LLMs tend not to produce unless prompted.
For our case, it’s smutwrights reading and analyzing smut, so our pattern matchers can be quite refined. But I don’t mind having a tool I can use to validate or invalidate my pattern matcher, because I’ve been wrong before and people have been wrong about me.