Thank you very much for taking the time to write up all these explanations; they give the public a lot of context, and that is much appreciated.
Just to move the matter forward (a strike should only be a temporary action meant to facilitate some kind of solution, possibly a compromise), it would be good to separate the issues involved as much as possible. I hope we can all help there.
One issue seems to be possible plagiarism in AI-generated content. That should be clarified: if AI-generated content is allowed to stay, is it necessary and sufficient to simply cite the generator, or what else needs to be done?
Another is the rate of false positives, which might be difficult to control, especially if complaints aren't always followed up. I think it would be helpful if the moderators, who know the matter best, made proposals for how that false-positive rate should be controlled in the future. What should detection of AI-generated content ideally look like? What maximum false-positive rate are we willing to accept?
> many users would merely dispute the findings without much in the way of evidence
I'm not sure what kind of evidence you were thinking of here. If I were in that situation, I wouldn't know what kind of evidence was needed. It would be helpful to describe how people should appeal this specific moderator action and what kind of evidence they should present. It seems like a difficult problem, but in this context it must be solved somehow if AI-generated content is to remain banned.
With the strike putting additional pressure on reaching a better solution, we should also concentrate on actively working towards a compromise. Simply waiting for the company to reverse its latest decision might not work out if there isn't a better alternative, one that ideally takes all valid arguments into account.