For us, there's an additional complication: Friendica and Hubzilla don't separate followers from followed. Like on Facebook, it's all one and the same here. Essentially, there are only mutual connections.
That is, if someone on Mastodon wants to follow you, that only works in the form of a mutual connection, i.e. following back. And so you eventually end up "following" heaps of people who have nothing interesting to post and who really only clutter your stream, just so they can follow you.
So what are you supposed to do? Let yourself be flooded? Delete the connection again? Then they can no longer follow you, and they'll notice. Block them? Then they can't follow you either, and they'll notice. Mute them? Then they can't send you anything at all anymore, not even comments or DMs.
That's why I'm glad about Hubzilla's permission settings. There, I can allow only those whose posts I actually want to see to send me their posts. Comments and DMs are regulated separately. On top of that, I also have filters per contact. The only ones who notice any of this are those who have delivery reports. Mastodon doesn't have them.
CC:
#Long #LongPost #CWLong #CWLongPost #LangerPost #CWLangerPost #FediMeta #FediverseMeta #CWFediMeta #CWFediverseMeta #Friendica #Hubzilla #Berechtigungen

Mastodon is increasingly moving from #NoAltTextNoBoost to #CrappyAltTextNoBoost, and I can see it move further to #NotEnoughAltTextNoBoost.
It is moving from only ostracising people for not providing image descriptions past ostracising people for providing useless image descriptions towards ostracising people for providing AI-generated image descriptions because they're at least partially wrong. The next victims may be people whose image descriptions leave out elements in the image which others may deem necessary to describe.
As quality requirements for image descriptions are being raised, I can't possibly lower the quality of my own image descriptions. If anything, I'll continue to upgrade my own image descriptions to stay ahead.
This is also why I'm worried about moving the long descriptions from the post text body into linked external documents. Not having certain descriptions and any explanations anywhere in the post anymore may backfire, and the external documents themselves may not be accessible and inclusive after all.
Interestingly, this is not congruent with what I read from actually non-sighted people. They don't even seem to care about accuracy, which they can't verify anyway, as long as the image description is funny and/or whimsical. Since that seems to be exactly what AI delivers, it's no wonder that many blind people prefer image descriptions from BeMyAI over image descriptions from human experts.
I think I'll keep on writing my monster descriptions, two for each original image. If any of you who aren't sighted don't like them for not being whimsical enough, feel free to ignore the hours or days of work I've put into them, fire up your AI and have your own image description generated.
#Long #LongPost #CWLong #CWLongPost #FediMeta #FediverseMeta #CWFediMeta #CWFediverseMeta #Mastodon #MastodonPolice #FediPolice #AltText #AltTextMeta #CWAltTextMeta #ImageDescription #ImageDescriptions #ImageDescriptionMeta #CWImageDescriptionMeta #Blind #VisuallyImpaired #A11y #Accessibility #AI #AIVsHuman #HumanVsAI

Embrace-Extend-Extinguish.
Embrace = they join the Fediverse.
Extend = they introduce stuff that's non-standard and not really compatible with what already exists. Everything else has to adopt that non-standard stuff or risk becoming incompatible. Since they're one of the biggest players or the biggest player, they can get away with it.
Threads has yet to do this. Mastodon has been doing it all the time.
Extinguish = They decide they don't care for interoperability anymore. They become incompatible with everything else. As big as they are, they get away with it. But they basically kill everything else off because all that "everything else" can hardly survive without being connected to that one big player.
Mastodon has more power to do this than Threads because Threads is a late addition to the Fediverse, Threads isn't even really welcome in the Fediverse, only a tiny fraction of Threads even federates, and I guess that to more people than not, Mastodon is the Fediverse.
(Public reply because the original post is public. I'm on Hubzilla and not on Mastodon, and on Hubzilla, all replies always have the exact same permissions as the original post. Permission changes within a thread are not supported.)
#Long #LongPost #CWLong #CWLongPost #FediMeta #FediverseMeta #CWFediMeta #CWFediverseMeta #EEE #EmbraceExtendExtinguish
So you think AI is always better at describing and even explaining any image out there than any human, including insiders and experts on the very topic shown in the image? 100% accurately? And at a higher level of detail than said humans?
Well, .
I'd like to see AI identify the place shown in the image as the central crossing at BlackWhite Castle and identify BlackWhite Castle as a standard-region-sized sim on Pangea Grid, a virtual world or so-called "grid" based on OpenSimulator.
I'd like to see AI explain the above, all the way down to a level that can easily be understood by someone who has only got a rough idea about what virtual worlds are.
I'd like to see AI correctly mention the pop-cultural links from this sim to German Edgar Wallace films and Frühstyxradio that are obvious to me.
I'd like to see AI correctly identify the avatar in the middle. By name. And know that identifying the avatar is appropriate in this context.
I'd like to see AI know and tell the real reason why the avatar is only shown from behind.
I'd like to see AI recognise that the image was not edited into monochrome, but it's actually both the avatar and the entire sim with everything on and around it that's monochrome.
I'd like to see AI transcribe text that's unreadable in the image. 100% accurately verbatim, letter by letter.
I'd like to see AI identify the object to the right and explain its origin, its purpose and its functionality in detail.
I'd like to see AI discover and mention the castle in the background.
I'd like to see AI accurately figure out whether it's necessary to explain any of the above to the expected audience and, if correctly deemed necessary, do so. And explain the explanations if correctly deemed necessary.
I'd like to see AI know and correctly mention which direction the camera is facing.
Finally, I'd like to see AI automatically generate two image descriptions: a full and detailed one with all explanations, and a shorter one that can fit into 1,500 characters minus the number of characters necessary to mention the full description and explain its location.
I did all of the above. And more.
In fact, if AI is supposed to be better than me, I expect it to identify all trees in the image, not only the mountain pines, to give an even more detailed description of the motel advertisement and to give a much more detailed description of the map, including verbatim transcripts of all text on it and accurate information on what place is shown on the map in the first place.
If AI is supposed to be better than me, I expect it to
- describe, explain and transcribe everything that I describe, explain and transcribe
- describe, explain and transcribe even more on top of that
- even more accurately than I do
- more whimsically
- and in much fewer characters.
All, by the way, fully automatically with no human intervention except for maybe a simple prompt to describe the image for a certain Fediverse project.
#Long #LongPost #CWLong #CWLongPost #OpenSim #OpenSimulator #Metaverse #VirtualWorlds #AltText #AltTextMeta #CWAltTextMeta #ImageDescription #ImageDescriptions #ImageDescriptionMeta #CWImageDescriptionMeta #A11y #Accessibility #AI #AIVsHuman #HumanVsAI

Allegedly, one piece of "good" advice for image descriptions is to always describe images as you'd describe them to someone on a landline phone.
Sorry, but that's nonsense. At least for anything that goes significantly beyond a real-life cat photo.
If you describe an image over the phone, you describe it to one person. Usually a person whom you know, so you've at least got a rough idea of what they need described. Even more importantly, you can ask that person what they want to know about the image if you don't know. And you get a reply.
If you describe an image for a public Fediverse post, you describe it to millions of Fediverse users and billions of Web users. You can't know what they all want, nor can you generalise what they all want. And you can't even ask one of them what they need described before or while describing, much less all of them. In fact, you can't ask at all. And yet, you have to cater to everyone's needs equally and throw no-one under a bus.
If I see a realistic chance that someone might be interested in some detail in one of my images, I will describe it. It won't be in the shorter description in the alt-text; instead, it will be in the long description, which I've always put directly into the post so far, but whose placement I'm currently reconsidering. If something is unfamiliar enough to enough people that it requires an explanation, I will explain it in the long description.
Right now, only meme posts are an exception. They don't need as much of a visual description as long as I stick to the template, and a poll has revealed that people do prefer externally linked third-party explanations over my own ones blowing the character count of the post out of proportion. This is the one time that I can safely assume that I actually know what most people want.
#Long #LongPost #CWLong #CWLongPost #AltText #AltTextMeta #CWAltTextMeta #ImageDescription #ImageDescriptions #ImageDescriptionMeta #CWImageDescriptionMeta #Inclusion #A11y #Accessibility
To put it bluntly, Solid is Tim Berners-Lee's attempt at re-inventing wheels named Hubzilla and nomadic identity.
Unlike Solid, Hubzilla is neither a vague concept, nor is it in its earliest alpha stages of development. Hubzilla was launched as early as 2015, and it has just released version 9.4.1. It is based on something called the Red Matrix from 2012. The idea of nomadic identity itself is from 2011. So it's a tried-and-true, well-established solution. Not to mention that all of this was created by a protocol developer, Mike Macgirvin.
When Tim Berners-Lee invented Solid, he didn't know about the Red Matrix or Hubzilla or the concept of nomadic identity because none of that stuff had been advertised anywhere.
He has learned about Hubzilla and nomadic identity in the meantime. But now it's too late to stop, abandon Solid and support Hubzilla and its successors instead. So he keeps pushing Solid forward, knowing pretty well that he's directly competing against Hubzilla and its successors, the streams repository from 2021 and Forte which was created in July.
Truth be told, Solid can't hold a candle to Hubzilla and its successors. Hubzilla and (streams) are both still based on protocols of their own while also, optionally, being able to communicate through ActivityPub. On (streams), nomadic identity via ActivityPub seems to be implemented already. And Forte uses nothing but ActivityPub while still being fully nomadic.
And so Solid can only survive by being advertised to the general public before they learn about Mike Macgirvin's creations. And about 75% of the Fediverse has never even heard the name "Hubzilla", including many developers. The successors are even more obscure.
Unfortunately, Mike Macgirvin officially retired from Fediverse development on August 31st. While he still occasionally submits code to both the streams repository and Forte, the times when he worked on them just about every day are over. And unlike Hubzilla, which was passed on to the community in 2018, the streams repository is struggling to find new main maintainers, and Forte even more so because it doesn't have any users yet, AFAIK.
#Long #LongPost #CWLong #CWLongPost #FediMeta #FediverseMeta #CWFediMeta #CWFediverseMeta #Fediverse #Solid #RedMatrix #Hubzilla #Streams #(streams) #Forte #NomadicIdentity

It's in maintenance mode, .
I believe so. Last I heard someone inherited the code base and its repo, without necessarily wanting to, and recently shut it down (IIRC). Meanwhile the old flagship instance and domain firefish dot social has been dead a long time.
Yup. Kainoa, the original developer of Firefish, dropped the project into 's lap, but none of the official servers.
- The old repository is dead; Naskya copied everything over to .
- The old official lighthouse instance is dead; Naskya set up instead.
- The old official website is dead; Naskya did not replace it.
Calckey died when Kainoa turned it into Firefish.
#Long #LongPost #CWLong #CWLongPost #FediMeta #FediverseMeta #CWFediMeta #CWFediverseMeta #Calckey #Firefish
Because too many cyclists out there are indifferent, ignorant arseholes who see the bicycle merely as a "mechanical walking aid" to which no traffic rules apply, because none apply to pedestrians either. And rules for getting along with each other apply even less.
Supposedly, you only have to ring your bell if you can't get through at all, so that the pedestrians walking ahead of you step aside. In other words: if you can get through without ringing, you don't ring, because you can't be bothered. Even if you whoosh past a pedestrian at over 30 km/h with the end of your handlebar two centimetres away from them.
Besides, a bell only costs money and adds weight. And a bicycle in competition trim, i.e. stripped of every attachment that's actually mandatory for road traffic except the brakes, is so much cooler than one that complies with the regulations. And lighter, and thus potentially even faster.
That you should perhaps also draw people's attention to yourself at times simply doesn't enter many heads. That's also why many constantly ride without lights. They believe the headlight is only there to illuminate the route. But: "I can see fine, I don't need a light."
What doesn't get into their skulls is that the lights on a bike are there even more so to be seen. If they ride around the way they do, without lights, without even a single reflector, and in dark clothes on a dark bike on top of that, then car drivers can barely see them.
And when they get run over, they whinge about the evil car drivers. That, by the way, is exactly why these characters consistently ride on the pavement when there's no cycle path: they've been in dangerous situations with car drivers often enough, but they always put the blame on the drivers.
#Long #LongPost #CWLong #CWLongPost #LangerPost #CWLangerPost #Radfahrer

This, by the way, is a very good litmus test for whether someone who is involved in virtual worlds in some way actually knows a thing or two about virtual worlds:
What kind of avatars, do you think, will people want to create? Digital copies of themselves? Or, given the opportunity to do so, something entirely different?
Brand-new relevant rant article by myself:
Also, relevant shared post with a link to New World Notes:
Watch: Zuckerberg Unveils Horizon Worlds Updates: "Improved" Avatars (Except Not), and Hyperlink-Like "Primitives" (Sounds Familiar!)
#Long #LongPost #CWLong #CWLongPost #SecondLife #OpenSim #OpenSimulator #Horizons #HorizonWorlds #Metaverse #VirtualWorlds #Avatar #Avatars

I would give an extensive, detailed, accurate, truthful visual description anyway. "Extensive" as in "absolutely gargantuan like you wouldn't believe", as I always do for this kind of image.
I might add for deaf-blind users that nothing in the image is real, physical and tangible. But I guess they can figure that out themselves.
#Long #LongPost #CWLong #CWLongPost #FediMeta #FediverseMeta #CWFediMeta #CWFediverseMeta #ImageDescription #ImageDescriptions #ImageDescriptionMeta #CWImageDescriptionMeta #DeafBlind #A11y #Accessibility

That'd still be making stuff up. I only describe what's actually there and what everything actually is like. You can't touch these things, you can't feel them, and "what if" has no room in my image descriptions.
Besides, no, I don't know what unreal surfaces would feel like, also because they lack any and all properties of real-life surfaces.
And if I had to think up what literal dozens of different surfaces in one image feel like, it'd take me even longer to write my image descriptions. And it already takes me from several hours upward to describe one image.
#Long #LongPost #CWLong #CWLongPost #ImageDescription #ImageDescriptions #ImageDescriptionMeta #CWImageDescriptionMeta #VirtualWorlds #A11y #Accessibility

In an ideal Fediverse, I'd provide at least four to six different descriptions per original image so that as many people as possible get a description that's as close as possible to what they need. And I'd link to them, even though that makes things more inconvenient for reasons I've already explained, and hope I'd get away with it.
Of course, this isn't feasible.
Still, I do have to find a way to satisfy as many people on Mastodon specifically as possible. Even if they aren't interested in what I write about, they may still cut into my reach.
It's incredibly difficult to get away with sub-standard accessibility for images on Mastodon. You can quickly be ostracised for not providing an alt-text as well as for providing an alt-text that isn't useful enough because it carries too little information about the image.
This starts at being lectured about having to provide good alt-text. It continues via refusals to boost posts with undescribed images. And it ends at people who post images without describing them being muted or even blocked. If your only connection on a particular instance blocks you, you also disappear from the instance's federated timeline, and your posts may no longer be delivered to those on that instance who follow a hashtag that you use.
Besides, reputation matters. If you don't take accessibility by Mastodon's standard seriously enough, and your posts appear on Mastodon often enough, you're likely to earn a bad rep as someone who's lazy and careless and basically ableist. Even useless alt-text is ableist by Mastodon's standards.
I already have a very active member of Mastodon's alt-text police among my connections, so I can't neglect this.
On the other hand, I don't think you'll get a bad rep for trying too hard, for doing over 1,000% of what's required and still trying to improve and optimise and max out your accessibility game.
What I can say is that I'm not constantly being scolded for giving too long and too detailed image descriptions, nor am I scolded for parking them in the post. I guess my alt-texts already keep the alt-text police satisfied for now, and they should still know that if my alt-texts aren't sufficient, there's still the long description.
The "I have no interest in your topic, and I don't even read your posts, but you violated this rule that I arbitrarily made up, and I will force you to change even if your actual followers don't even want it" people may still cut into my reach, so I can't ignore them. That said, they don't actually enforce rules which they themselves have just pulled out of thin air right there and then. They enforce rules which the greater Mastodon community has already firmly established, and which Mastodon is expected to live by. And when I say "Mastodon", I mean the typical Mastodon user's perception of Mastodon, which includes everything that happens on Mastodon, regardless of where it comes from.
You aren't exempt from these rules just because you're on Hubzilla. Besides, Mastodon users can't see where you are anyway. So they'll assume you're on Mastodon unless they know better.
I'd say my current ways of describing and explaining my in-world images have been working quite well. I usually don't get any feedback, but this also means I don't receive any negative criticism about neglected accessibility.
If I were to do any major changes such as moving the full descriptions into linked external documents, it'd be a gamble.
For one, the technical side is largely untested. All I know is that a blind user once told me that a Hubzilla Article didn't work with her screen reader, so Hubzilla Articles may actually not be accessible at all. I'm not going to post the majority of my future images on Hubzilla anyway, only on (streams) which can make Mastodon blank them out, and (streams) doesn't have the Articles app.
So all I could do would be a simple HTML document uploaded to the Files app that contains the full description, maybe with the described image embedded as you've suggested. However, I don't know how any given browser on any given OS handles an HTML document dealt to it not by a Web server, but by a file server. Will it open and display the document, or will it download the document to the device like any other file without opening it?
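Whether a browser displays or downloads such a document mostly comes down to the HTTP headers the file space sends along with it, which is something that can be checked rather than guessed. A minimal sketch, assuming a hypothetical document URL (hub.example.org and the path are placeholders):

    import requests  # third-party: pip install requests

    # Hypothetical URL of an HTML description document in a channel's file space
    url = "https://hub.example.org/cloud/mychannel/descriptions/image1.html"

    # HEAD request: fetch only the response headers, not the document itself
    r = requests.head(url, allow_redirects=True)

    # Browsers generally render "text/html" inline. A generic type such as
    # "application/octet-stream", or a Content-Disposition of "attachment",
    # makes most of them download the file instead of opening it.
    print(r.headers.get("Content-Type"))
    print(r.headers.get("Content-Disposition"))

If the file space reports text/html without an attachment disposition, the document should open in place; anything else would explain a download-instead-of-display behaviour.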
Besides, what else is untested is the acceptance of linked image descriptions, especially when they're the only available source of transcripts of the bits of text in an image. After all, this means that something that's defined as absolutely mandatory, the transcripts, is not available in the post at all, only in some external document that requires extra effort and, on mobile devices, an extra app to open.
Externally linked descriptions may improve the acceptance because they drastically shorten the image posts, and they make image posts with multiple images within the post text possible in the first place.
But they may just as well have the opposite effect because they remove a detailed image description from the post as well as any and all explanations and, worst of all, every last text transcript, and instead, they park everything someplace that requires extra effort and, on phones, letting another app open to access them.
Yes, Mastodon doesn't like long posts. Some Mastodon users actually block everyone whom they catch posting over 500 characters upon first strike. But this can be partially mitigated with a long-post content warning.
And there are way more complaints about missing or useless image descriptions than about too-long posts. So I dare say the former cuts into your reach more deeply than the latter.
#Long #LongPost #CWLong #CWLongPost #FediMeta #FediverseMeta #CWFediMeta #CWFediverseMeta #AltText #AltTextMeta #CWAltTextMeta #ImageDescription #ImageDescriptions #ImageDescriptionMeta #CWImageDescriptionMeta

Well, it isn't just about the size and shape. That'd be easy because these virtual worlds consist of 3-D models.
It's more about what it feels like to touch someone's skin, a piece of someone's clothing, the surface of an object. It's that kind of tactile information that many deaf-blind people go after, and it's only that kind of tactile information that they're interested in.
But how am I supposed to describe what it feels like to touch something that can't be touched because it simply doesn't have a physical real-life existence?
Okay, so that avatar is wearing a tweed jacket. But that tweed isn't really tweed fabric. It's a digital painting of tweed fabric on an object which actually has no material whatsoever and no physical properties whatsoever. You can't reach out your hand and lay your fingertips on that jacket and feel it.
Even if that jacket's surface was bump-mapped or normal-mapped, which it isn't, it still couldn't be touched and felt.
Now, someone could suggest I should describe what things would feel like if they were actually real in the physical realm. But I prefer my image descriptions to be accurate and, most importantly, truthful.
And besides, what about objects that do not have a texture representing any real-life material? Not only are they without any physical properties, but they don't even suggest any life-like physical properties. Even I don't know what it'd feel like to touch them.
#Long #LongPost #CWLong #CWLongPost #FediMeta #FediverseMeta #CWFediMeta #CWFediverseMeta #ImageDescription #ImageDescriptions #ImageDescriptionMeta #CWImageDescriptionMeta #VirtualWorlds #DeafBlind #A11y #Accessibility

Still, my intended target audience is very widespread, regardless of whether I'm posting about OpenSim or about the Fediverse.
I mean, sure, I could write my OpenSim posts in such a way that only those who know OpenSim inside out because they're active users understand them. It'd be easier for me.
But there are probably fewer than 50 registered users in the Fediverse who know more about OpenSim than that it exists, if even that. Of these, maybe half a dozen are active. That wouldn't exactly be a big target audience.
At the same time, there are lots of people out there who receive my posts, too, be it by following me, be it by following one of the hashtags I use, be it by discovering one of my posts on their federated timeline.
Now, these people may pretty well have their curiosity piqued upon receiving one of my posts with an image from within a virtual world, the kind of which they would never have imagined exists since the Metaverse hype ended.
At the same time, the post doesn't tell them anything about this kind of world. Nothing is explained, and nothing is described, or if something is described, it's the very bare minimum requirement for the most basic accessibility. Even if they're sighted, they'd really like to know what kind of place this picture is from, and what the picture actually shows. There may be items in the picture that they've never seen before.
It's even worse if they're blind. Sure, you may say that blind or visually-impaired people have no use for virtual worlds, and thus literally not a single one of them could even be interested in the topic.
I wouldn't count on that. I'd say a blind or visually-impaired Fediverse user may just as well be curious about virtual worlds as a fully sighted user, including what everything looks like. If they are, they've got the same right to learn about it as a fully sighted user. Everything else would be ableist.
Both need my help. No-one else can really help them. No, not Google either. Not on this level of detail, and I'm not even talking about the first hits for OpenSim probably leading them to the human body simulation of the same name.
Both need explanations. And the blind or visually-impaired user needs visual descriptions. Only I can provide them on a sufficiently detailed and sufficiently accurate level.
Not providing sufficient description and explanation for image posts on this level of obscurity is about on the same level as not providing any image description at all whatsoever for images in general. Hubzilla doesn't care. Mastodon, on the other hand, calls that ableist. And Mastodon calls it out.
My target audience for posts about the Fediverse, including memes, is even wider. It's basically everyone, especially my target audience for Fediverse memes.
But not everyone is on the same level of knowledge. And if an image is included, which is almost always the case in my meme posts, I have to describe the image anyway.
I guess you've read that shared post by . Image descriptions are not only good for blind or visually-impaired people, but they're also good to help sighted people understand an image which they wouldn't understand without the description.
So I describe and explain the meme image.
But explanations don't belong in alt-text. I've already written about that.
So I describe the meme image in the alt-text, and I explain it in the post, where explanations belong.
Sure, I could say, "If they don't get it, they don't get it. I don't care." But doing so would make these posts inaccessible to the vast majority of their intended target audience.
#Long #LongPost #CWLong #CWLongPost #FediMeta #FediverseMeta #CWFediMeta #CWFediverseMeta #AltText #AltTextMeta #CWAltTextMeta #ImageDescription #ImageDescriptions #ImageDescriptionMeta #CWImageDescriptionMeta
Scott M. Stolz First of all: If your target audience is mainly on Mastodon, and you don't want to lose any more reach than necessary, you have to play by Mastodon's rules, whether you want to or not.
My target audience is mainly on Mastodon, and I've already lost too much reach.
This means three things among several others.
One, there must be a sufficiently useful image description in the alt-text, full stop, no exception. For at least some Mastodon users, it doesn't matter how many image descriptions you've got elsewhere. Especially not if you have to hide your post behind a summary/content warning because it exceeds 500 characters, and they'll see your image before your post because they're still on Mastodon 4.2 or older, which keeps images outside of content warnings.
Believe me, I spend more than enough time lurking on mastodon.social to know.
Two, if you put a post with more than 500 characters out in the open with no content warning to hide it, droves of Mastodon users will block you. Do that often enough, and it'll dramatically cut into your reach on Mastodon. Remember, Mastodon is your target audience.
Three, you must protect people from potential triggers and sensitive images. On Mastodon, that isn't limited to porn and gore. It includes eyes. Eyes can trigger autistic and other neurodiverse people. And not only in facial portraits of people looking straight into the camera, by the way.
Point three, by the way, means you can't safely post pictures with faces or eyes in them from Hubzilla to Mastodon. Hubzilla has no way of making Mastodon blank sensitive images out.
A summary/content warning only helps on Mastodon 4.3 which hides images under content warnings. But Mastodon 4.2 or older, which means one solid half of the whole Fediverse, only hides the post text under a content warning and leaves the images dangling below the hidden post. And when a post comes from outside, the images are not blanked out. They remain in everyone's plain sight.
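For reference, both markers travel as plain fields on the federated object, and it's each receiving application that decides what to hide. A rough sketch of the relevant fields on an ActivityStreams Note as Mastodon-compatible software sends them (structure simplified, values made up; Mastodon itself types attachments as Document rather than Image):

    # Skeleton of a federated Note with a content warning and one image.
    # "summary" carries the CW text, "sensitive" asks clients to blank media,
    # and "name" on the attachment carries the alt-text.
    note = {
        "@context": "https://www.w3.org/ns/activitystreams",
        "type": "Note",
        "summary": "CW: long post, eye contact",  # shown as the content warning
        "sensitive": True,                        # honoured (or not) per client
        "content": "<p>Post text goes here.</p>",
        "attachment": [{
            "type": "Image",
            "url": "https://example.org/media/picture.png",
            "name": "Alt-text / image description goes here.",
        }],
    }

Whether the image ends up hidden behind the warning is thus purely a client-side decision, which is exactly why Mastodon 4.2 and 4.3 behave differently on identical data.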
My meme images almost always have faces in them. The OpenSim in-world pictures I want to post are likely to have faces in them, too. So posting them on Hubzilla is out of question.
I've launched two (streams) channels for these while you were away:
(streams) automatically sets Mastodon's proprietary sensitive flag on images if they're included in a post that contains the hashtag #nsfw or #sensitive. Hubzilla doesn't have this feature.
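I haven't checked the actual (streams) source, but as described, the feature boils down to deriving the flag from the post's hashtags rather than from the images themselves. A sketch of that logic, not (streams) code:

    def outgoing_sensitive_flag(post_text: str) -> bool:
        """Return True if the post's images should federate as sensitive."""
        # Collect hashtags from the post text, ignoring case and trailing
        # punctuation; the flag is set per post, not per image.
        tags = {word.lower().rstrip(".,;:!?")
                for word in post_text.split()
                if word.startswith("#")}
        return bool(tags & {"#nsfw", "#sensitive"})

    # A post tagged #sensitive would go out with its images flagged:
    print(outgoing_sensitive_flag("New in-world pictures #OpenSim #sensitive"))  # True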
However, (streams) neither has the Articles app, nor does it have the Webpages app. If I wanted to use either, I'd have to post images on (streams) and then link to an article or a webpage on this very Hubzilla channel.
Now to some of your specific points.
"Image ALT tag ONLY if different than the short description."
Again, alt-text always. Unless you don't care for your audience on Mastodon. Again, I do.
So the short description under the image is redundant.
You don't have to repeat what the text says in the meme.
The transcript of the meme text belongs into the alt-text.
Any text transcripts belong into the alt-text.
The only exception in my case are if there's too much text, and it doesn't fit into the alt-text. In this case, the transcripts go into the long description, and the alt-text mentions that the long description contains all text transcripts.
"But the alt text would need to be longer since it would need to explain what the meme is. The alt text and the short description would not be the same."
Alt-text must never explain anything. No information must ever be exclusively available in alt-text.
That's because not all people can access alt-text.
- Certain frontends don't support alt-text.
- People with a severe tremor can't operate digital devices with a steady enough hand to be able to access alt-text.
- Alt-text requires handling pointing devices. This means that quadriplegic people who use their computer by poking the keyboard with a headpointer strapped to their forehead or with a kind of pen that they hold in their mouth can't access alt-text either. They operate their computer only by keyboard.
- And so on.
This means that explanations must be in a place that can be accessed by anyone who can access the post itself. This applies to anything that's part of the post text body, for example.
"image (alt text = "An image about topic.")"
Nitpick, maybe, but starting alt-text with "image about", "picture about" or "photo about" = baaaaaaaaaaaad style. Other media are at least tolerated, but "image", "picture" and "photo" are deadly sins.
"Basically, when you have super long descriptions (over 500 to 1000 characters), you should treat it as if you are promoting a blog post on social media (rather than the whole thing only exists on social media)."
My image descriptions don't make for good blog posts.
I have put image descriptions into articles twice already:
- , early 2023, highly outdated
- , 2023
You can see that they don't work as stand-alones that I could show random strangers on the Web.
If I were to do it again, this would be the style, only with a more up-to-date description.
Speaking of which, let me show you how I currently actually handle them.
First of all, some meme posts.
My first attempt at an updated meme post was in early August.
Back then, I was still convinced that people find links to external information sources cumbersome and inconvenient to the point of being not accessible and uninclusive, and that they wanted everything explained in the post.
So I did. And I ended up with a whopping nine explanations:
- what the image itself is about, namely the teething problems of the implementation of decentralised IDs as per FEP-ef61 as one of the first steps of adding nomadic identity via ActivityPub to the streams repository
- the "One Does Not Simply Walk Into Mordor" meme template
- snowclones
- image macros
- advice animals
- Something Awful
- 4chan and, generally, imageboards
- nomadic identity
- Hubzilla, (streams) and their whole history from Mistpark to (streams)
All so that everyone in the Fediverse would easily have a chance to understand the meme post.
However, these nine explanations are over 25,000 characters long. The last two explanations make up half of that.
I couldn't really imagine that everyone actually wanted to wade through so much text in one place. I couldn't really imagine that the six meme explanations in the post should be more accessible than , which, through its own links, would explain stuff like snowclones as well.
And so I ran a poll a couple of weeks ago. Call the poll rigged because I mentioned which way of explaining a meme would end up with how many estimated characters. I call it honest.
Without the character count, I guess just about everyone would have wanted me to explain everything in the post. That is, Mastodon users would have wanted me to explain everything in the alt-text; Mastodon does everything image-related in alt-text because that's how Mastodon does it, and because you can't do much with only 500 characters.
But with the character count, nobody wanted me to explain things in the post. Not if that meant 10,000+ characters.
As you wish. So I switched to linking what I could possibly link.
My next meme post, was very simple, so one link to KnowYourMeme for the meme template was sufficient. Otherwise, it went with no explanation at all.
required an explanation for the image itself, but in addition to that, a link to KnowYourMeme was enough.
, my latest meme post, required some more explanation and a few more links. I had to explain the whole situation which the meme references.
The explanation had to include a very brief explanation of (streams) because the link to the streams repository (no. 3 out of a whole lot of links) isn't informative enough for someone who barely knows the Fediverse beyond Mastodon.
Firefish required a short explanation of the situation plus a link to the EOL announcement plus a link to the official Firefish repository plus a link to the official Firefish lighthouse instance.
/kbin required another short explanation of the situation plus a link to the repository to show for how long it has been abandoned plus a link to the dead website plus a link to the dead lighthouse instance plus a link to a post on Lemmy that gives some more insight.
In addition, at the top, there are two links to KnowYourMeme because I've combined two memes.
I could say that I've got meme posts down pat now. It only gets a bit difficult if I have to explain some more in the post because there are no external explanations for certain aspects of the post available. But I don't have to re-phrase entire KnowYourMeme pages anymore since people have voted against it.
My original in-world images are trickier because they work differently.
For one, they require much, much more detailed visual descriptions. Meme posts based on known and established templates hardly require any visual description. My in-world images, on the other hand, I have to re-paint with words, because everything in them may be interesting for someone out there.
There also tends to be more text to transcribe. The basic rule is, "If there's text in the image, it must be transcribed." The rule does not handle any edge-cases, so I assume they have to be transcribed, too, if I can transcribe them. And I can. And "important within the context" doesn't apply if the image itself is the context.
Besides, they require much, much more explanation by me. There are far more things about them that the average casual onlooker is unfamiliar with. Thus, they have to be explained. But there are far fewer useful explanations for them readily available on the Web, if any. And before I force people to scrape together useful and comprehensible information from , I'd rather explain OpenSim myself in a way that people understand right away. And with fewer characters for more useful content.
I've written an article about this: .
My most recent OpenSim image post here on Hubzilla is from around April. It's only slightly outdated.
In numbers, it has
- one image
- one alt-text of precisely 1,500 characters, of which 1,402 characters are image description and 97 characters are a short note on where a full description can be found
- a full description of 60,553 characters in the post, right below the only image, including all necessary explanations and transcripts of all text within the borders of the image
The post is an absolute monster. It's my second-longest post to date. grew even longer at over 78,000 characters; I wrote it a year ago, and it contains three highly detailed images. But none of the image descriptions in that post manages to exceed 60,000 characters. And that post only has faint traces of a description preamble. Also, it places each description below the corresponding image.
This excessive size is one of the reasons why I considered parking the description someplace else.
My most recent OpenSim image post went to (streams) two months ago. .
It contains two images, and yet, it's much shorter. I've managed to reduce the necessary description effort by reducing the scenery in both images as much as I possibly could. Also, both images show the same avatar in the same outfit which I only had to describe and explain once.
This time, I put both images below one another in one block. The image descriptions are their own block right below. Also, this time, the preamble with common explanations for both images is the largest part, also because it contains the common description and explanation of the avatar for both images.
In this case, this was simply the easiest way to do it because both images have so much in common. However, this does not translate well to individual image descriptions per image, linked below the images themselves. If I were to do that, each image description would have to contain all necessary description elements and all necessary explanations. It might largely end up copy-paste, but still.
This was easy to do for another reason: The actual post text is only one block. The images are separate from it.
Now, however, I'm planning to write a blog-style post with up to four images within the post text, as in surrounded by post text above and below. I want to write a few things about the images that are not image description in the alt-text/accessibility sense. And that has to be close to each individual image.
This excludes individual long image descriptions per image right below each image. This also makes a big image description block at the end inconvenient because the described images would be so far away from their descriptions.
And that's why I considered putting the raw image descriptions into external documents without, however, completely overthrowing my way of describing images. I've honed it to a high level of quality over almost two years, improving my description style from post to post. And I'm fully aware just how detailed image descriptions for images from virtual worlds have to be.
I just need to find a way to make it work with a blog-style post with several images embedded within the actual post, both without ripping the post apart by placing humongous chunks of image descriptions into it and without placing the descriptions too far from the images within the post.
#Long #LongPost #CWLong #CWLongPost #FediMeta #FediverseMeta #CWFediMeta #CWFediverseMeta #Memes #Hubzilla #Streams #(streams) #AltText #AltTextMeta #CWAltTextMeta #ImageDescription #ImageDescriptions #ImageDescriptionMeta #CWImageDescriptionMeta

I guess Heise, too, would prefer to shut down its instance and then claim always and everywhere that the Fediverse is dead anyway because it's far too complicated for "ordinary people". Then they wouldn't have to bother with it anymore and could ignore it completely, as they have been doing with diaspora* since 2012 at the latest.
If it weren't for , who is genuinely committed, who actually engages in dialogue, who actually experiments, and who probably generates more traffic single-handedly than the entire Swiss government instance combined.
#Long #LongPost #CWLong #CWLongPost #LangerPost #CWLangerPost #FediMeta #FediverseMeta #CWFediMeta #CWFediverseMeta

Scott M. Stolz Disadvantages:
- That'd be three separate and different descriptions per image. One in the alt-text. One redundant one under the image with no added value. And the long description.
- Remember that even my "short" image descriptions are generally over 1,000 characters. That'd be quite a chunk of text under the image.
- Even a very bare-bones explanation of what OpenSimulator is would be too long for a "short" below-the-image caption. And it would look stupid both under only one image and not the others, and under all images alike.
- Full-blown Web page only works on Hubzilla with the Webpages app. This excludes all portraits or other images with eye contact because Hubzilla cannot make Mastodon blank them out (eye contact can trigger neurodiverse people). (streams) can, and so I only post such images on (streams), but (streams) has no Webpages app.
- The alternative, an HTML document in the file space, is theoretically possible on (streams) as well, but entirely untested in practice.
- A plain text file won't work as a fallback because I can't embed an image in a plain text file.
- Due to the embedded image in the external document, each link to one of these documents would likely require its own content warnings, e.g. eye contact.
It's basically option 3, but even worse.
#Long #LongPost #CWLong #CWLongPost #FediMeta #FediverseMeta #CWFediMeta #CWFediverseMeta #ImageDescription #ImageDescriptions #ImageDescriptionMeta #CWImageDescriptionMeta

"Another alternative would be to deactivate ActivityPub (optional) (because it is toxic) and rely on Diaspora / DFRN."
This would basically create three parallel "Fediverses".
One, the ActivityPub-based one which could be considered "bugged" and illegal as per GDPR because data sniffers would be everywhere.
This part of the Fediverse would quickly wither away and die for one sole reason: Mastodon, which makes up over 70% of today's Fediverse, is developed in Germany, and mastodon.social, which makes up over 20% of today's Fediverse, is hosted in Germany. Germany is a member of the EU and thus GDPR territory. In fact, Germany was a driving force behind the GDPR. Mastodon, in its entirety, would basically become illegal in its own country. Unless, of course, Gargron took the step of moving Mastodon over to the USA entirely and shuttering Mastodon gGmbH.
Two, the "safe" Social Web based on the diaspora* protocol and consisting of diaspora* itself, Friendica, Hubzilla and Socialhome. Many users of the latter three (if Friendica still offers this option, and if Socialhome introduces it) will turn ActivityPub off and cut off all ties to the ActivityPub-based Fediverse.
Three, the "even safer" nomadic grid of Hubzilla and (streams), based on the Zot6 protocol. Both definitely do allow ActivityPub being off, and on Hubzilla, it is off on new channels by default.
I guess both would feel quite some relief at no longer being bound to a "Fediquette" defined entirely by Mastodon users who barely know, or don't know at all, that the Fediverse is not only Mastodon. Especially the several public Hubzilla hubs in Germany, including the two biggest ones, would be quick to turn ActivityPub off on the hub level because leaving it on would suddenly be illegal.
I'm not sure what'd happen to Mike Macgirvin's latest fork, Forte, which is (streams) with Nomad and Zot6 removed and nothing but ActivityPub left. On the one hand, like (streams), it has an unparalleled permissions system. On the other hand, ActivityPub itself would become inherently unsafe. Besides, even something as basic as direct messages isn't necessarily all that private if Pleroma admins can choose to circumvent direct-message privacy on a per-instance level. And yes, Pleroma has that option.
#Long #LongPost #CWLong #CWLongPost #FediMeta #FediverseMeta #CWFediMeta #CWFediverseMeta #Fediverse #ActivityPub #Mastodon #Pleroma #diaspora* #Friendica #Hubzilla #Streams #(streams) #Forte #Zot #Zot6 #GDPR
Like !
Their interest is hegemony.
They won't be able to achieve that. For that, they'd all have to join forces. Instead, the big players would rather bash each other's heads in.
If Zuckerberg wanted to buy out the Fediverse through the SWF and assume full control over it, that'd go hard against what Flipboard wants, what Ghost wants, what Mastodon wants etc.
If you think Zuck and Gargron will join forces in EEE'ing the Fediverse, I have a bridge to sell you. If Zuck wants to EEE the Fediverse, this includes Mastodon which is why everyone panics. But Gargron himself wants to EEE the Fediverse, too. Mastodon users don't notice it, but everyone else does, from Misskey to (streams). And if he could, that'd include assuming indirect control over Threads by forcing Mastodon's proprietary, non-standard solutions upon it.
Write.as, the maintainer of WriteFreely, is a supporter of the SWF, too. WriteFreely is basically one of the Fediverse's alternatives to Medium. Notice how Medium itself is a supporter as well. Medium won't try to build a monopoly and get rid of WriteFreely through the SWF if WriteFreely itself has a say in this. And so is Automattic, because Medium competes against WordPress.
Mastodon vs Write.as + Medium + Automattic = "The Fediverse is for microblogging, we won't support more than a tiny subset of HTML, and the rest of the Fediverse has to adjust" vs "Everything in the Fediverse must support full HTML rendering for Article-type objects."
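The dispute is concrete at the rendering level: microblogging-first software sanitises incoming HTML down to a small allowlist of tags, so richer markup simply doesn't survive the trip. A toy illustration of allowlist sanitising (the allowlist here is made up for the example and is not Mastodon's actual one):

    from html.parser import HTMLParser

    class AllowlistSanitizer(HTMLParser):
        """Keep only allowlisted tags; pass all text content through."""
        ALLOWED = {"p", "br", "a", "span"}  # illustrative subset only

        def __init__(self):
            super().__init__()
            self.out = []

        def handle_starttag(self, tag, attrs):
            # A real sanitiser would also filter attributes (href, class, ...)
            if tag in self.ALLOWED:
                self.out.append(f"<{tag}>")

        def handle_endtag(self, tag):
            if tag in self.ALLOWED:
                self.out.append(f"</{tag}>")

        def handle_data(self, data):
            self.out.append(data)

    def sanitize(html: str) -> str:
        s = AllowlistSanitizer()
        s.feed(html)
        return "".join(s.out)

    # Headings from an Article-type object are silently flattened to bare text:
    print(sanitize("<h1>Title</h1><p>Body with <b>bold</b> text</p>"))
    # -> Title<p>Body with bold text</p>

Run anything blog-shaped through a filter like this and the formatting argument between the two camps becomes obvious.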
And the SWF won't grant voting power according to how much money someone invests into it. Meta won't be able to out-vote everyone else by pumping a billion into the SWF. If that were possible, , as the very founder of the SWF, would have no voting power at all.
Lastly, these are the supporters. The financial backers. Not the only members, and even less the only members the SWF will ever have. It's just kind of difficult for a Fediverse project with nothing but one or two spare-time devs behind it to join the SWF.
CC:
#Long #LongPost #CWLong #CWLongPost #FediMeta #FediverseMeta #CWFediMeta #CWFediverseMeta #Fediverse #Mastodon #MetaPlatforms #Threads #Automattic #WordPress #Medium #Write.as #WriteFreely #SWF #SocialWebFoundation #EEE #EmbraceExtendExtinguish

And this could actually improve with more disabled Fediverse users because that'd mean more people who could tell us what they really need.
Ideally, they'd also not flock to only Mastodon, stick to Mastodon and largely ignore the rest of the Fediverse. Ideally, some of them would test Friendica and find it enough to their liking to start a public forum about accessibility in the Fediverse because that's something that needs to be discussed in a place that supports actual discussions between more than just two people.
My original images are extreme edge-cases in terms of describing them. My way of describing them is mostly based on assumptions and erring on the "too much" side to be safe. These assumptions include, but aren't limited to:
- Blind or visually-impaired people aren't necessarily satisfied when you mention that something is in the image. They need to know what it looks like, and if they don't know what it looks like, they need a detailed description.
- If sighted people explore images with their eyes and take in all the details, regardless of what's important in the context, then blind or visually-impaired people want to be able to do that, too. Thus, images that may be interesting enough for whatever reason need full descriptions of all visible details, for any definition of "visible".
- In the Fediverse, people generally prefer detailed descriptions. The more details, the better.
- In the Fediverse, nobody minds if you fill the Mastodon/Misskey limit of 1,500 characters for alt-text to the brim.
- Describing the camera height, the camera angle and the position of the Sun, if applicable, is at least a good idea.
- Having to transcribe all text in an image means having to transcribe literally every last bit of text that at least partially exists somewhere in the displayed scene within the borders of the image, regardless of whether or not the text is readable in the image, at least if you're able to source the full text.
- At least some blind people don't have a concept of image resolution. If you write in your image description that something is too low-res to clearly identify or read, they may take it for an excuse to be lazy.
- If your image shows something that people aren't familiar with, not only does it require a detailed visual description, but you also have to explain it. (By the way: Explanations always go into the post and never into alt-text because not everyone can access alt-text.)
- Mastodon absolutely requires a sufficiently useful image description in alt-text, even if you state both in the alt-text and in the summary/content warning that there is a full, detailed image description in the post (and, in the case of the summary/content warning, how long the image description is). So each image that requires a detailed description actually requires two descriptions: one in the alt-text that's as long and detailed as possible within the limits of what Mastodon, Misskey and their forks don't chop off, in addition to the full, long, detailed one in the post (see the sketch right after this list).
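Numerically, that last assumption is a simple budget: the alt-text description may use whatever is left of the 1,500 characters once the pointer to the full description is accounted for. A sketch of that budgeting, with made-up strings:

    # Mastodon and Misskey cap alt-text at 1,500 characters; anything longer
    # gets chopped off, so the short description has to fit a fixed budget.
    ALT_TEXT_LIMIT = 1500

    pointer = "Full, detailed description with all explanations in the post below."
    short_description = "..."  # the shorter, purely visual description

    budget = ALT_TEXT_LIMIT - len(pointer) - 1  # minus one separator space
    if len(short_description) > budget:
        overshoot = len(short_description) - budget
        raise ValueError(f"Short description is {overshoot} characters over budget")

    alt_text = f"{short_description} {pointer}"
    assert len(alt_text) <= ALT_TEXT_LIMIT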
Based on these assumptions, I tend to spend an eternity describing one single image, and thus, I only rarely post images in the first place. I often don't even take images if I think it's impossible for me to sufficiently describe them. Virtual world pictures come with the disadvantage of showing people whole new worlds and making them want to explore these new worlds through these images, so based on my assumptions, .
For example, I've used my channel picture with that huge, weird building in it before. I went to three or four other places before, and this was the only one that I deemed possible to describe. It still took me two full days to research and describe it. The result was an alt-text of exactly 1,500 characters, including a bit over 1,400 characters of short description, and a full description of over 60,000 characters. And that was even after actually limiting myself to what can be seen if you're in that place, as opposed to describing images within that image with details that could only be seen if you teleported to the places shown in these images.
It's probably hard to say I haven't tried hard enough, but I guess hardly anyone has ever opened the summary/content warning, and hardly anyone ever will. My image descriptions may be useful for a few neurodiverse users who actually need such detailed descriptions and explanations, and they may be good for silencing the "alt-text police" which seems to increasingly go for not only alt-text availability, but also the quality of image descriptions. For most blind or visually-impaired people, they may be of only marginally more use than no image description, if at all.
By the way, I wanted to post another picture of the same building from another angle. The reason why I didn't was not because it would have been too tedious to describe it. It was because there are pictures in that building with human faces on them, and from that other angle, they became visible, for any definition of "visible" anyway. That would have meant a potential eye contact trigger by the definition I know.
So I didn't post that second image because Hubzilla gives me no means of hiding an image from users of Mastodon 4.2 or older or users of many mobile Mastodon apps. I've since started two (streams) channels on which I post images with faces or eyes because (streams) supports Mastodon's non-standard sensitive image flag. And I've considered replacing all my various profile pictures in the Fediverse with pictures with no face in them, although just about everyone else has at least one face in their profile picture.
It's clear that assumptions can be bad both ways. It's bad to assume that colours are useless in image descriptions, but all the things I assume up there lead to image descriptions that may be similarly useless, but due to their overwhelming length.
Disabled people want us to make things accessible to them based on what they actually want, not on what we assume they want. There's that slogan: "Nothing about us without us." They want us to listen to them.
However, it's impossible to listen to them if they don't speak up. Right now, they don't, probably also because they're too few, and they're all only sitting somewhere where meaningful group discussions are prevented by technology. And if you have a question about accessibility in extreme edge-cases like mine, searching mastodon.social for blind users and DM'ing them, half a dozen at once, can't be the right way. Unfortunately, it's currently the only way.
#Long #LongPost #CWLong #CWLongPost #FediMeta #FediverseMeta #CWFediMeta #CWFediverseMeta #Fediverse #AltText #AltTextMeta #CWAltTextMeta #ImageDescription #ImageDescriptions #ImageDescriptionMeta #CWImageDescriptionMeta #NothingAboutUsWithoutUs #Inclusion #A11y #Accessibility

Yes, s are that and !
I keep seeing painfully useless alt-text again and again myself.
Still, I'm wondering if there's any consensus, any standard for alt-text quality that's specific to Mastodon. (I'm saying "Mastodon" because that's the only part of the Fediverse where alt-text matters, by the way.)
Whenever I post an image, rare as this may be due to the immense effort, I try to improve and optimise my way of describing images and making my image descriptions available to Fediverse users.
But there isn't much information available on what people want. I hardly have any sources for such information, I barely get any feedback for my image posts, and that feedback contradicts itself. On top of that, there's absolutely no place in the Fediverse where Fediverse accessibility could be discussed between more than just two users at a time.
So all I can do is try to assume what Mastodon users may want. And I'm very likely constantly stepping on people's toes or even throwing many of those who need image descriptions under buses while doing so.
I typically describe my original images twice: once with a shorter, purely visual description in the alt-text, and once with a very long, very detailed description in the post that also contains all text transcripts and all necessary explanations, and which is also mentioned in the alt-text.
Now that I'm getting closer to finding an optimal way of describing and explaining memes based on established templates, I'm starting to question parts of the above method for my original images.
#Long #LongPost #CWLong #CWLongPost #FediMeta #FediverseMeta #CWFediMeta #CWFediverseMeta #AltText #AltTextMeta #CWAltTextMeta #ImageDescription #ImageDescriptions #ImageDescriptionMeta #CWImageDescriptionMeta

Always. No exceptions anymore for I don't know how long. I don't forget to do something that takes hours or days for one image.
They're always 100% hand-written. Maybe copy-pasted from my own descriptions of similar images, but no AI involved.
And I'm constantly trying to improve my description style, so the older an image post is, the less I consider its image description(s) even halfway up-to-date. For example, I'm working on completely re-designing my way of describing and explaining memes.
I'd rather not post images at all than post them with what I'd consider sub-standard image descriptions.
#Long #LongPost #CWLong #CWLongPost #AltText #AltTextMeta #CWAltTextMeta #ImageDescription #ImageDescriptions #ImageDescriptionMeta #CWImageDescriptionMeta

There could be several reasons for this.
One, "All my friends are here." They don't want to let go of large parts of their online social circle.
Alternatively, "But muh followers." They don't want to let go of their fame and move someplace that doesn't even have enough users to build up the same numbers of followers in no time.
Two, inconvenience. The Fediverse is too cumbersome to use in comparison with centralised corporate silos.
You can basically load onto your phone, enter a user name and a password, and bam, you've got a bustling feed.
On the other hand, while you can load Mastodon onto your phone, it asks you too many questions. There's that field where you have to choose an instance. And yes, leaving it at the default is a choice you have to make. It doesn't railroad you hard enough.
And once you're online, your timeline is dead. Many have left because they thought the whole Fediverse was as dead as their timeline, after Mastodon had been praised to them as, quote, "literally Twitter without Musk". You have to take care of getting content into your timeline yourself, and there's no assistant to help you with it. Mastodon doesn't have training wheels; everything else in the Fediverse has them even less, but doesn't even require them.
Oh, and you can't load "Fediverse" onto your phone at all. Or anything else in the Fediverse that could largely do the same job as Mastodon if people knew about it.
Third-party apps? Most of them have probably never in their lives used an online service through an app that doesn't have the same name as the online service.
Lastly, it isn't worth it. The only thing that may motivate them to move would be a place that's totally guaranteed to be an absolute, 100% safe haven for them with no harassment, no ableism and no hindrances in accessibility. The same goes for other marginalised minorities such as BIPoC or the 2SLGBTQQIA+ community, only that spoons aren't as much of an issue for them.
However, it may actually be known, even on , that these latter two aren't perfectly safe in the Fediverse either. That's the very reason why BlackMastodon failed. So disabled people may not be safe either. Thus, the effort of leaving everything behind and learning to use new software might not be justified by the result if nothing much changes anyway.
#Long #LongPost #CWLong #CWLongPost #FediMeta #FediverseMeta #CWFediMeta #CWFediverseMeta #Disabled #DisabilityTwitter #Mastodon #Fediverse