

Yes, but many Mastodon users see Mastodon as the client.
To them, Pleroma, Akkoma, Misskey, Firefish, Friendica & Co. are nothing but Mona, IceCubes, Tusky, Fedilab & Co.: alternative graphical frontends for the same thing. Except that the former are web frontends and the latter are phone apps.
Of course, this almost always goes hand in hand with the unshakeable "knowledge" that Eugen Rochko invented the Fediverse.
#Long #LongPost #CWLong #CWLongPost #LangerPost #CWLangerPost #FediMeta #FediverseMeta #CWFediMeta #CWFediverseMeta #Fediverse #Mastodon #NichtNurMastodon
Vanilla Mastodon is somehow the only software that doesn't allow admins to adjust the character limit.

They can adjust the character limit. But it requires digging into the source code because the limit is hard-coded. Basically, if you as an admin want a higher character limit, you have to soft-fork Mastodon.
And whenever you upgrade your instance to a new version, it's force-rebased to vanilla, and you have to soft-fork it and apply your changes again.
Better yet: Changing the character limit by modifying the source code is, technically speaking, illegal. It would only be compliant with the GNU Affero General Public License if the admin made the modified source code available, e.g. by uploading it to GitHub as a fork of vanilla Mastodon. But no Mastodon admin goes through the hassle of learning git just to have more characters.
CC:
#Long #LongPost #CWLong #CWLongPost #FediMeta #FediverseMeta #CWFediMeta #CWFediverseMeta #AGPL #Mastodon #CharacterLimit #CharacterLimits #500Characters
And while I'm at it, here's a quote-post of my comment in which I review the second AI description.
It's almost hilarious how clueless the AI was again. And how wrong.
First of all, the roof isn't curved in the traditional sense. The end piece kind of is, but the roof behind it is more complex. Granted, unlike me, the AI can't look behind the roof end, so it doesn't know.
Next, the roof end isn't reflective. It isn't even glossy. And brushed stainless steel shouldn't really reflect anything.
The AI fails to count the columns that hold the roof end, and it claims they're evenly spaced. They're anything but.
There are three letters "M" on the emblem, but none of them is stand-alone. There is visible text on the logo that does provide additional context: "Universal Campus", "patefacio radix" and "MMXI". Maybe LLaVA would have been able to decipher at least the first, had I fed it the image at its original resolution of 2100x1400 pixels instead of the one I've uploaded with a resolution of 800x533 pixels. Decide for yourself which was or would have been cheating.
"Well-maintained lawn". Ha. The lawn is painted on, and the ground is so bumpy that I wouldn't call it well-maintained.
The entrance of the building is visible. In fact, three of the five entrances are. Four if you count the one that can be seen through the glass on the front. And the main entrance is marked with that huge structure around it.
The "few scattered clouds" are mostly one large cloud.
At least LLaVA is still capable of recognising a digital rendering and tells us how. Just you wait until PBR is out, LLaVA.
#Long #LongPost #CWLong #CWLongPost #FediMeta #FediverseMeta #CWFediMeta #CWFediverseMeta #AltText #AltTextMeta #CWAltTextMeta #ImageDescription #ImageDescriptions #ImageDescriptionMeta #CWImageDescriptionMeta #AI #LLaVA
#Long #LongPost #CWLong #CWLongPost #VirtualWorlds #AltText #AltTextMeta #CWAltTextMeta #ImageDescription #ImageDescriptions #ImageDescriptionMeta #CWImageDescriptionMeta #LLaVA #AI #AIVsHuman #HumanVsAI
And since you obviously haven't actually read anything I've linked to, here's a quote-post of my comment in which I dissect the first AI description.
(This is actually a comment. Find another post further up in this thread.)
Now let's pry LLaVA's image description apart, shall we?
The image appears to be a 3D rendering or a screenshot from a video game or a virtual environment.

Typical for an AI: It starts vague. That's because it isn't really sure what it's looking at.
This is not a video game. It's a 3-D virtual world.
At least, LLaVA didn't take this for a real-life photograph.
It shows a character

It's an avatar, not a character.
standing on a paved path with a brick-like texture.

This is the first time that the AI is accurate without being vague. However, there could be more details to this.
The character is facing away from the viewer,

And I can and do tell the audience in my own image description why my avatar is facing away from the viewer. Oh, and that it's the avatar of the creator of this picture, namely myself.
looking towards a sign or information board on the right side of the image.

Nope. As if the AI could see the eyeballs of my avatar from behind. The avatar is actually looking at the cliff in the background.
Also, it's clearly an advertising board.
The environment is forested with tall trees and a dense canopy, suggesting a natural, possibly park-like setting.

If I'm generous, I can let this pass as not exactly wrong. Only that there is no dense canopy, and this is not a park.
The lighting is subdued, with shadows cast by the trees, indicating either early morning or late afternoon.

Nope again. It's actually late morning. The AI doesn't know because it can't tell that the Sun is in the southeast, and because it has got no idea how tall the trees actually are, what with almost all treetops and half the shadow cast by the avatar being out of frame.
The overall atmosphere is calm and serene.

In a setting inspired by thrillers from the 1950s and 1960s. You're adorable, LLaVA. Then again, it was quiet because there was no other avatar present.
There's a whole lot in this image that LLaVA didn't mention at all. Let's start with the most blatant shortcomings.
First of all, the colours. Or the lack of them. LLaVA doesn't say with a single word that everything is monochrome. What it's even less aware of is that the motif itself is monochrome, i.e. this whole virtual place is actually monochrome, and the avatar is monochrome, too.
Next, what does my avatar look like? Gender? Skin? Hair? Clothes?
Then there's that thing on the right. LLaVA doesn't even mention that this thing is there.
It doesn't mention the sign to the left, it doesn't mention the cliff at the end of the path, it doesn't mention the mountains in the background, and it's unaware of both the bit of sky near the top edge and the large building hidden behind the trees.
And it does not transcribe even one single bit of text in this image.
And now for what I think should really be in the description, but what no AI will ever be able to describe from looking at an image like this one.
A good image description should mention where an image was taken. AIs can currently only tell that when they're fed famous landmarks. AI won't be able to tell from looking at this image anytime soon that it was taken at the central crossroads of Black White Castle, a sim in the OpenSim-based Pangea Grid. And I'm not even talking about explaining OpenSim, grids and all that to people who don't know what they are.
Speaking of which, the object to the right. LLaVA completely ignores it. However, it should be able not only to correctly identify it as an OpenSimWorld beacon, but also to describe what it looks like and explain to the reader what an OpenSimWorld beacon is, what OpenSimWorld is etc., because it should know that this cannot be expected to be common knowledge. My own description does that in roughly 5,000 characters.
And LLaVA should transcribe what's written on the touch screen which it should correctly identify as a touch screen. It should also mention the sign on the left and transcribe what's written on it.
In fact, all text anywhere within the borders of the picture should be transcribed 100% verbatim. Since there's no rule against transcribing text that's so small it's illegible, so tiny it's practically invisible, or partially obscured or partially out of frame, a good AI should be capable of transcribing such text 100% verbatim in its entirety as well. Unless text is too small for me to read in-world, I can do that, and I do.
And how about not only knowing that the advertising board is an advertising board, but also mentioning and describing what's on it? Technically speaking, there's actually a lot of text on that board, and in order to transcribe it, its context needs to be described. That said, I must admit I was sloppy myself and omitted a whole lot of transcriptions in my own description.
Still, AI has a very very long way to go. And it will never fully get there.
#Long #LongPost #CWLong #CWLongPost #AltText #ImageDescription #ImageDescriptions #ImageDescriptionMeta #CWImageDescriptionMeta #AI #LLaVA
#Long #LongPost #CWLong #CWLongPost #VirtualWorlds #AltText #AltTextMeta #CWAltTextMeta #ImageDescription #ImageDescriptions #ImageDescriptionMeta #CWImageDescriptionMeta #LLaVA #AI #AIVsHuman #HumanVsAI
Without any context

The context matters. A whole lot.
A simple real-life cat photograph can be described in a few hundred characters, and everyone knows what it's all about. It doesn't need much visual description because it's mainly only the cat that matters. Just about everyone knows what real-life cats generally look like, except from the ways they differ from one another. Even people born 100% blind should have a rough enough idea what a cat is and what it looks like from a) being told it if they inquire and b) touching and petting a few cats.
Thus, most elements of a real-life cat photograph can safely be assumed to be common knowledge. They don't require description, and they don't require explanation because everyone should know what a cat is.
Now, let's take the image which LLaVA has described in 558 characters, and which I've previously described in 25,271 characters.
For one, it doesn't focus on anything. It shows an entire scene. If the visual description has to include what's important, it has to include everything in the image because everything in the image is important just the same.
Besides, it's a picture from a 3-D virtual world, not from the real world. People don't know anything about this kind of 3-D virtual world in general, and they don't know anything about this place in particular. In this picture, nothing can safely be assumed to be common knowledge. For blind or visually-impaired users, even less so.
People may want to know where this image was made. AI won't be able to figure that out. AI can't examine that picture and immediately, with absolute certainty, recognise that it was created on a sim called Black-White Castle on an OpenSim grid named Pangea Grid, especially seeing as that place was only a few days old when I was there. LLaVA wasn't even sure whether it was a video game or a virtual world. So AI won't be able to tell people.
AI doesn't know either whether any of the location information can be considered common knowledge, and is therefore necessary to explain so humans will understand it.
I, the human describer, on the other hand, can tell people where exactly this image was made. And I can explain it to them in such a way that they'll understand it with zero prior knowledge about the matter.
Next point: text transcripts. LLaVA didn't even notice that there is text in the image, much less transcribe it. Not transcribing every bit of text in an image is sloppy; not transcribing any text in an image is ableist.
No other AI would be able to transcribe the text in this image either. That's because no AI can read any of it: it's all too small and, on top of that, too low-contrast for reliable OCR. All the AI has is the image I've posted at a resolution of 800x533 pixels.
I myself can see the scenery at nigh-infinite resolution by going there. No AI can do that, and no LLM AI will ever be able to do that. And so I can read and transcribe all text in the image 100% verbatim with 100% accuracy.
However, text transcripts require some room in the description, also because they additionally require descriptions of where the text is.
I win again. And so does the long, detailed description.
Would you rather have alt text that is:

I'm not sure if this is typical Mastodon behaviour because it's impossible for Mastodon users to imagine that images can be described elsewhere than in the alt-text (they can, and I have), or if it's intentional trolling.
The 25,271 characters did not go into the alt-text! They went into the post.
I can put so many characters into a post. I'm not on Mastodon. I'm on Hubzilla which has never had and still doesn't have any character limits.
In the alt-text, there's a separate, shorter, still self-researched and hand-written image description to satisfy those who absolutely demand there be an image description in the alt-text.
25,271 characters in alt-text would cause Mastodon to cut 23,771 characters off and throw them away.
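The arithmetic above can be sketched quickly. The 1,500-character cap below is an assumption inferred from the numbers in this post (25,271 − 23,771 = 1,500), not an official figure; check your instance for the actual alt-text limit.

```python
# Hedged sketch: what a hard alt-text cap does to a long description.
# ALT_TEXT_LIMIT is an assumption inferred from the arithmetic above.
ALT_TEXT_LIMIT = 1_500

def truncate_alt_text(description: str, limit: int = ALT_TEXT_LIMIT):
    """Return the kept portion and the number of characters thrown away."""
    kept = description[:limit]
    return kept, max(0, len(description) - limit)

description = "x" * 25_271  # stand-in for the real 25,271-character description
kept, lost = truncate_alt_text(description)
assert len(kept) == 1_500
assert lost == 23_771
```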
#Long #LongPost #CWLong #CWLongPost #VirtualWorlds #AltText #AltTextMeta #CWAltTextMeta #ImageDescription #ImageDescriptions #ImageDescriptionMeta #CWImageDescriptionMeta #LLaVA #AI #AIVsHuman #HumanVsAI
as the addon/app thingie still has to be added by the admin for you to be able to use it

They do come pre-installed on the server.
Hubzilla is installed and upgraded using git. And in doing so, not only is the core sourced, but so are the addons, always automatically. All that stuff is on the server hard drive from the get-go.
What the admin can do is activate and deactivate access to the addons. In the case of PubCrawl, it's activated after installation by default.
It's not like the hubmin installs Hubzilla without PubCrawl, then clones an independent git repository containing PubCrawl into the Hubzilla server, and then goes to the admin interface to switch it on.
#Long #LongPost #CWLong #CWLongPost #FediMeta #FediverseMeta #CWFediMeta #CWFediverseMeta #Hubzilla
Prediction: Alt text will be generated by AI directly on the consumer's side so that *they* can tell what detail, information density, parts of the picture are important for *them*. And pre-written alt text will be frowned upon.

Won't happen.
Maybe AI sometimes happens to be as good as humans when it comes to describing generic, everyday images that are easy to describe. By the way, I keep seeing AI miserably failing to describe cat photos.
But when it comes to extremely obscure niche content, AI can only produce useless train wrecks. And this will never change. When it comes to extremely obscure niche content, AI not only requires full, super-detailed, up-to-date-by-the-minute knowledge of all aspects of the topic, down to niches within niches within the niche, but it must be able to explain it, and it must know whether, and to what extent, it's necessary to explain it.
I've pitted LLaVA against my own hand-written image descriptions. Twice. Not simply against the short image descriptions in my alt-texts, but against the full, long, detailed, explanatory image descriptions in the posts.
And LLaVA failed so, so miserably. What little it described, it often got it wrong. More importantly, LLaVA's descriptions were nowhere near explanatory enough for a casual audience with no prior knowledge in the topic to really understand the image.
Descriptions whipped up in five seconds are no match for one that took me eight hours to research and write.
Descriptions whipped up in 30 seconds are no match for one that took me two full days to research and write.
When I describe my images, I put abilities to use that AI will never have. Including, but not limited to the ability to join and navigate 3-D virtual worlds. Not to mention that an AI would have to be able to deduce from a picture where exactly a virtual world image was created, and how to get there.
So no, ChatGPT won't write circles around me by next year. Or ever. Neither will any other AI out there.
#Long #LongPost #CWLong #CWLongPost #VirtualWorlds #AltText #AltTextMeta #CWAltTextMeta #ImageDescription #ImageDescriptions #ImageDescriptionMeta #CWImageDescriptionMeta #LLaVA #AI #AIVsHuman #HumanVsAI
Quick and easy solution:
Move away from Mastodon. Go someplace else in the Fediverse that doesn't have a 500-character limit.
Move to Pleroma (5,000 characters, configurable by admin).
Or Akkoma (5,000 characters, configurable by admin).
Or Misskey (3,000 characters, hard-coded).
Or one of the several Misskey forks (thousands of characters again, configurable by admin).
Or Friendica (unlimited characters).
Or Hubzilla (unlimited characters).
Or (streams) (unlimited characters; that is, the database can handle a bit over 65,000 characters on older instances with no database upgrade and over 24,000,000 characters otherwise).
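The limits listed above can be summed up as a quick lookup table (values as stated in this post; several of them are admin-configurable defaults rather than fixed caps, and the key names are my own choice for illustration):

```python
# Character limits as stated above. None means "unlimited".
# Several of these are configurable defaults, not hard caps.
CHARACTER_LIMITS = {
    "Mastodon (vanilla)": 500,
    "Pleroma": 5_000,
    "Akkoma": 5_000,
    "Misskey": 3_000,
    "Friendica": None,
    "Hubzilla": None,
    "(streams)": None,
}

def fits(post: str, platform: str) -> bool:
    """True if the post fits within the platform's character limit."""
    limit = CHARACTER_LIMITS[platform]
    return limit is None or len(post) <= limit

long_post = "a" * 2_000
assert not fits(long_post, "Mastodon (vanilla)")
assert fits(long_post, "Pleroma")
assert fits(long_post, "Hubzilla")
```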
All of them are in the Fediverse. The Fediverse is not only Mastodon. All of them are connected to Mastodon. Proof: This comment comes from Hubzilla.
And then do something that nobody on Mastodon would even come up with the idea to do:
Put the excerpt into the post.
#Long #LongPost #CWLong #CWLongPost #FediMeta #FediverseMeta #CWFediMeta #CWFediverseMeta #NotOnlyMastodon #FediverseIsNotMastodon #MastodonIsNotTheFediverse #Pleroma #Akkoma #Misskey #Forkey #Forkeys #Firefish #Iceshrimp #Sharkey #Catodon #Friendica #Hubzilla #Streams #(streams) #AltText #AltTextMeta #CWAltTextMeta #ImageDescription #ImageDescriptions #ImageDescriptionMeta #CWImageDescriptionMeta #CharacterLimit #CharacterLimits #500Characters


Well, that's what it is. It's the wording chosen back when add-ons were renamed "apps".
Basically, you "install" the "app" on your channel. It integrates the functionality of the "app" into your channel instead of just turning it on, and it adds or makes it possible to add links to the "app" to the navigation bar and/or the burger menu.
I'm not a Hubzilla dev, but add-ons may simply work differently here. I guess add-ons on Friendica are integrated into your account right away, and you just turn them on and off, whereas on Hubzilla, "installing" an "app" does not turn existing functionality in your channel on; it builds the functionality into your channel in the first place. PubCrawl in particular is something that digs deeply into how your channel works, especially how nomadic identity works.
Don't forget that Hubzilla is not Friendica with nomadic identity and wikis and a different protocol. When Mike turned Friendica into Red, he re-wrote the entire backend from scratch.
This wording may actually be as old as Red. Trying to have it changed now would be like complaining that identities plus content are named "channels" on Hubzilla (and everything that has come after Hubzilla) because the term "channels" is being used in different contexts by other projects nowadays. It'd be like complaining that Hubzilla, like Friendica before it, has its own term for a server instance instead of officially calling it an "instance" or a "server".
#Long #LongPost #CWLong #CWLongPost #FediMeta #FediverseMeta #CWFediMeta #CWFediverseMeta #Hubzilla
Misskey did it right, the Forkeys even more so.
Unlike Mastodon, they didn't aim to be purist Twitter clones. Misskey wanted to do microblogging, but without Twitter's unnecessary limitations and with some cool extra features, partly appealing particularly to a Japanese target audience.
It certainly helped that Misskey was originally launched in 2014, some two years before Mastodon.
Friendica, created in 2010 as Mistpark, is a similar case. It aimed to be an alternative to Facebook, but not a 1:1 Facebook clone. The idea was to build something that does the same thing as Facebook in similar ways as Facebook, but without what limited Facebook and without what sucked about Facebook.
And of course, it got a big pile of cool extra features on top that could be useful.
For example, circles. You may perceive them as either Mastodon's lists done right or a clone of Google+'s circles. Actually, however, Google+ was a clone of Diaspora*, Google+'s circles were a clone of Diaspora*'s aspects, and what are Friendica's circles today used to be Mistpark's groups. And Mistpark pre-dated even Diaspora*, so Mistpark had them first.
Also, next to being an alternative to Facebook, Mistpark was equipped as a full-blown blogging engine. No character limit. Full set of text formatting, up to and including headlines, lists and tables. In-line embedding of images and other media which can be uploaded to the built-in file storage before embedding them. StatusNet's summary field (which Mastodon repurposed into a content warning field seven years after Mistpark's launch in spite of being federated with both StatusNet's successor GNU social and Friendica) and a separate title field. A tag cloud. And so forth.
But we can also see it in the Threadiverse. First there were Lemmy, an attempt at a faithful clone of Reddit that also aimed at replacing Hacker News, and Lotide, which fell by the wayside. Shortly before the Reddit enshittification, /kbin appeared. It added some interesting extra features, but it could never really mature due to being overrun at an early alpha stage.
Now we also have Mbin, a /kbin fork that improves on it, PieFed and Sublinks, all of which are more than mere Reddit or Hacker News clones.
#Long #LongPost #CWLong #CWLongPost #FediMeta #FediverseMeta #CWFediMeta #CWFediverseMeta #Misskey #Forkey #Forkeys #Mistpark #Friendica #Threadiverse #Lemmy #/kbin #Mbin #PieFed #Sublinks
I'm thinking about adding at least one of my channels to Trunk. I mean, it isn't like I don't have enough followers; they've risen above 500 again. But Trunk would help people follow me for a better reason than just one cool post or comment, all still without having to figure out how to check my profile.
That said, Trunk requires you to volunteer on at least one list, in at least one topic. That's where things get difficult.

For one, there's the Described Media list. I'm not even kidding: it's a list for people who describe the media which they post. People who add alt-text to their images. Even though everybody in the Fediverse is expected to do it all the time, at least if their posts reach Mastodon in some way.
I do it. But I don't do it "the standard Mastodon way". For one, Mastodon's limitations, especially the 500-character limit for posts, don't apply to me. I don't have any character limit in my posts. Thus, nothing forces me to describe and even explain an image only in alt-text because I've got plenty of space in my posts.
Besides, my image descriptions are absolutely massive, especially taking all those typical image description guidelines into consideration. That's because none of them are prepared for the edge cases that are my images. And with "absolutely massive", I don't mean, "800 characters? Are you nuts?! Who's gonna read that?!" I mean up to 60,000 characters, and I can guarantee you this is not a typo. Maybe even more in the future.
I'm not quite convinced that I'm a good example of a provider of media descriptions, partly because by adhering to general image description rules, I break most of Mastodon's image description rules, partly because next to nobody has the patience to read one image description that's longer than 120 toots or have it read to them by a screen reader, partly also because my own image descriptions become obsolete so quickly whenever I discover something new that I should do in image descriptions.
Even if none of this mattered, I don't post images often. Maybe once every couple months. That's because I have to schedule my image posts due to how much time they consume. The 60,000-character description took me two full days to research and write, breakfast to after dinner. And it might become even rarer in the future. I've started to be able to post images with sensitive content, including but not limited to eyes and faces. But posting these will eat up the time I could also use to post perfectly safe images on this Hubzilla channel.
The Described Media list is rather for people who routinely whip up 200 characters of alt-text in under a minute or so, but who do so at least daily.

An even more obvious list, at least at first glance, would be the virtual reality list, seeing as the primary topic of this channel is OpenSim. In fact, in the long run, I could add two or three channels to this list.
But OpenSim does not fit on it. The list is for actual virtual reality, for new virtual reality and augmented reality developments of the 2020s. "The Metaverse" as envisioned by most. It absolutely requires VR or AR headsets, full stop.
OpenSim has been using the term "metaverse" routinely since as early as 2007, the year of its inception. But the list is not about "metaverse". It's about VR.
And OpenSim is what's commonly called a "pancake". It's made for desktop and laptop computers and their 2-D screens. It does not really work on VR headsets. It does not work on stand-alone VR headsets with integrated graphics hardware at all. That's mainly because VR headsets require a constantly guaranteed frame rate of 60fps. It isn't simplified and cartoonish and geared towards mobile graphics hardware like Horizons or Rec Room or the like. Instead, it's largely photo-realistic, high-detail stuff with high-resolution textures.
You may get 60fps out of a dedicated graphics unit on a not-too-highly-detailed sim when you're alone. But have more than a few avatars around, and your fps will drop below 60. Join a party or any other event with a couple dozen avatars, and you're heading for slideshow-level fps. That's because the avatars aren't made by the OpenSim devs and optimised for high performance. They mostly consist entirely of user-supplied stuff, optimised for good looks rather than performance. Some two years ago, one average avatar had more vertices than an entire scene in World of Warcraft. They've only gotten much, much more complex since then.
A liquid-cooled 4090Ti overclocked to kingdom come won't give you 60fps at 1080p at 's Event Plaza on a Friday night. So, what chances does a stand-alone, passively-cooled headset based on phone hardware have if it has to whip up even more pixels? And none of this even takes the recently-introduced Physically-Based Rendering into account, which absolutely requires dedicated graphics hardware with no less than 4GB of dedicated VRAM, preferably at least 8GB.
That is, you couldn't use OpenSim on a stand-alone headset anyway. There are only two OpenSim-compatible viewers available right now, they're only available for desktop operating systems, and their highly complex UIs (pull-down menus like you've last seen in Photoshop etc.) are entirely geared towards desktop and laptop computers.
In brief: OpenSim is not VR, and it's unlikely to ever truly become VR.
Okay, I still have the option to ask one of the four Trunk admins to add an extra "Virtual Worlds" list, arguing that OpenSim, just like Second Life, is not VR and thus doesn't fit onto a VR & AR list. But they might argue that it's close enough to VR & AR for a separate list not being justified.
#Long #LongPost #CWLong #CWLongPost #FediMeta #FediverseMeta #CWFediMeta #CWFediverseMeta #MediaDescription #MediaDescriptions #AltText #AltTextMeta #CWAltTextMeta #ImageDescription #ImageDescriptions #ImageDescriptionMeta #CWImageDescriptionMeta #OpenSim #OpenSimulator #Metaverse #VirtualWorlds #VR #VirtualReality #AR #AugmentedReality #Trunk
FenTiger
One Fediverse server instance recognising people from other instances being logged in.
For example, here on Hubzilla, I could theoretically allow only confirmed connections to see my profile. So if someone opens my profile page, they won't see anything.
Now let's suppose one of my connections is on something with client-side OpenWebAuth support. Friendica has client-side support, Hubzilla and (streams) have full support. And let's suppose that someone is logged in using the same Web browser that they use to visit my profile page.
OpenWebAuth automatically acts as magic single sign-on. It recognises their login and their identity, and it shows them my profile because I've allowed them to see it. On the very same page on which everyone else wouldn't see anything.
I could give such permission for many other things, i.e. to allow people to post to my wall, to allow them to see my files and images, to allow them to upload stuff to my file space, to allow them to edit my wikis (unfortunately only all of them, not individually for each wiki) etc.
A practical use-case for such permissions in combination with OpenWebAuth would be if I ran a channel as a public forum. In that case, I could give a bunch of trustworthy Hubzilla or (streams) users additional permissions so that they can moderate the forum, at least partly even from their own existing channels.
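The permission logic described above can be sketched minimally. Everything in this snippet (names, data structure) is hypothetical and for illustration only, not Hubzilla's actual code; it assumes OpenWebAuth has already verified who the visitor is.

```python
# Hypothetical sketch: per-identity permissions on a channel.
# OpenWebAuth's job is to verify WHO the visitor is; the channel
# then decides WHAT that identity may do. None = anonymous visitor.
PERMISSIONS = {
    "alice@streams.example": {"view_profile", "post_to_wall", "moderate_forum"},
    "bob@friendica.example": {"view_profile"},
}

def can(identity, permission: str) -> bool:
    """Check whether a verified identity holds a given permission."""
    if identity is None:  # no OpenWebAuth login detected
        return False
    return permission in PERMISSIONS.get(identity, set())

# A trusted connection can moderate; an anonymous visitor sees nothing.
assert can("alice@streams.example", "moderate_forum")
assert not can("bob@friendica.example", "post_to_wall")
assert not can(None, "view_profile")
```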
#Long #LongPost #CWLong #CWLongPost #FediMeta #FediverseMeta #CWFediMeta #CWFediverseMeta #Hubzilla #OpenWebAuth #SingleSignOn
c) ActivityPub is off by default, not to mention d) how to turn it on which is anything but straight-forward.

Holy cow. I guess I'll stay on Friendica for the foreseeable future :D

To be fair, ActivityPub is off by default because it makes nomadic identity more difficult on the level that Zot6 offers. And it's off by default at channel level only whereas it's on by default at hub level.
With "anything but straight-forward", I mean it isn't like on (streams), where you have an ActivityPub on/off switch in the settings, which is usually even on when you first discover it.
Instead, ActivityPub is an add-on, an "app" that has to be "installed". But newbies don't expect Hubzilla to have add-ons at all because Twitter has none, and Mastodon has none. And they certainly wouldn't expect ActivityPub, of all things, to be an add-on and off by default. In fact, most probably join Hubzilla in the belief that it is based on ActivityPub like "everything else" in the Fediverse.
#Long #LongPost #CWLong #CWLongPost #FediMeta #FediverseMeta #CWFediMeta #CWFediverseMeta #Hubzilla #Streams #(streams) #ActivityPub #NomadicIdentity
The only Hubzilla forums I'm aware of that are more or less active are Hubzilla-centric themselves: the Hubzilla support forum, which is pretty much useless if you aren't on Hubzilla yourself, and a kind of planning forum for the advancement of the Hubzilla ecosystem, which I'd even expect to have ActivityPub off.
The problem is simply that only very, very few people take the step from one of the ActivityPub-based microblogging projects to Friendica just to have a forum. My estimation is that every other Mastodon user still "knows" that the Fediverse is only Mastodon. Of the rest, some have never heard of Friendica, and even more don't know that Friendica has forums that Mastodon users can join.
I guess quite a few of those who try their luck quickly discover that Friendica is, in fact, not a pure group/forum platform, not "Guppe, but moderated", and that it's too confusing and difficult to wrap their minds around. The step from Mastodon to Friendica is much bigger than the step from Twitter to Mastodon.
And that's Friendica. It's even worse with Hubzilla.
There has been a poll which showed that 75% of those who took it had never heard of Hubzilla before this poll. And I guess the audience was actually biased, also because it wasn't on Mastodon AFAIR, so the real number has to be even higher. Amongst those who have heard of Hubzilla, its capabilities are even murkier than Friendica's.
Hubzilla's Web UI is basically what Friendica's UI was a dozen years ago, and it buckles under Hubzilla's sheer number of features. Its documentation is both incomplete and hopelessly outdated in many parts, and it's written like a tech spec rather than a user manual.
Lastly, most Mastodon-to-Hubzilla direct converts throw in the towel the same day they took the jump. That's because they fail to connect to anyone on Mastodon. That, in turn, is because it takes another, experienced Hubzilla user to tell them that a) Hubzilla is not based on ActivityPub, b) ActivityPub is optional on Hubzilla, and c) ActivityPub is off by default, not to mention d) how to turn it on which is anything but straight-forward.
That's why hardly anyone runs a forum on Hubzilla.
#Long #LongPost #CWLong #CWLongPost #FediMeta #FediverseMeta #CWFediMeta #CWFediverseMeta #Groups #Forums #Friendica #Hubzilla between Misskey, the now-dead FoundKey, Firefish, Iceshrimp.NET, Sharkey and Catodon.
Sharkey is said to have a terrible Mastodon API implementation, though.
Also, in case you haven't noticed, is on Friendica, which should be compatible with Mastodon, Misskey and their respective forks alike. We've had these federated groups for five and a half years longer than we've had Mastodon.
Granted, Friendica does not specialise in groups/forums, but still.
Hubzilla has federated public and private forums, too, with even more features. Its permissions system makes it possible for other Hubzilla or (streams) users to moderate forums by becoming co-admins, not to mention that Hubzilla and (streams) users can moderate their own threads. But I've yet to see Hubzilla forums in action with Mastodon and *key users.
(streams) is similar, with even more advanced permission controls, and it doesn't come with quite as many extra optional features that you'll never need as Hubzilla does. But it's being redesigned on the inside to introduce nomadic identity via ActivityPub, which has recently caused problems in federation with anything that doesn't use (streams)' own Nomad protocol. I'll have to check whether these issues are gone in the current release.
#Long #LongPost #CWLong #CWLongPost #FediMeta #FediverseMeta #CWFediMeta #CWFediverseMeta #Fediverse #Misskey #Forkey #Forkeys #Firefish #FoundKey #Iceshrimp #Iceshrimp.NET #Sharkey #Catodon #Groups #Forums #Friendica #Hubzilla #Streams #(streams)

Chillingly, most people who develop long COVID did not have particularly vicious cases of the virus initially.

That's in part because so many more people experience a mild form of COVID rather than a severe one.

(Across most studies, long COVID risk does increase with the severity of the initial infection.)

And each time people become reinfected with the virus, they're at risk of developing long COVID,
even if they didn't experience it previously.

The authors note that studies on recovery from long COVID are sparse and inconsistent.

But those that have closely evaluated individual manifestations of the virus have found recovery rates to be fairly low at one year,
and only 7% to 10% fully recovered after two years.

For millions and millions of people, the debilitating effects of long COVID are just that.

The economic toll is its own story.

A Census Bureau Household Pulse Survey in 2022 found that between
2 million and 4 million working-age Americans were out of work because they were sidelined with the effects of long COVID.

Meanwhile 20% of people with long COVID surveyed by the United Kingdom's Trades Union Congress said they were not working.

Another 16% were working reduced schedules.

The $1 trillion estimated annual global economic hit involves Organization for Economic Cooperation and Development (OECD) countries due to
reductions in quality of life and labor force participation,
the Nature Medicine review says.

And that price tag does not factor in the direct costs of healthcare, another likely category of deep financial fissure.

Al-Aly and his co-authors are pleading for governments,
especially U.S. health agencies,
to dramatically upgrade their levels of activity to investigate long COVID,
learn more about its mechanisms and pathways,
and develop vaccines that better block infection.

They're also pushing for large-scale platform trials to test multiple drugs simultaneously,
so we can quickly learn what works and what doesn't.

They have an ally in Congress.

U.S. Sen. Bernie Sanders recently introduced the Long COVID Research Moonshot Act,
a bill that would guarantee $1 billion per year for a decade to the National Institutes of Health
to support long COVID research,
the urgent pursuit of treatments,
and the expansion of care for patients across the country.

Sanders' news release announcing the act puts the long COVID count in the U.S. at
22 million, including a million children.

Among other things, the bill would require NIH to establish a long COVID and a to speed up clinical trials,
and to make any subsequent treatments so that every patient could receive them.

"The legislation that we have introduced finally recognizes that long COVID is a public health emergency," Sanders said.

"Congress must act now to ensure treatments are developed and made available for Americans struggling with long COVID."

If it accomplishes nothing else, Sanders' proposal may help lift the veil on long COVID in the country and around the world.

It's a topic that has been largely pushed into the shadows,
as governmental agencies
and official policy
tried to construct a reality in which COVID in general was just no longer a significant issue.

That was never the case, and it's certainly not the case now.

"The reality is otherwise," Al-Aly says.

As the virus mutates into new strains and continues to affect millions, the long tail of COVID is once again forcing itself to center stage.

Multiple COVID subvariants, collectively nicknamed ,
are powerfully present in the U.S.,
and reports from California indicate that some patients are complaining of sore throats so strong it feels like they're
"swallowing razors or broken glass,"
according to the Los Angeles Times.

Topol, meanwhile, says the Sato Lab in Japan has characterized one of the newest COVID strains,
KP.3.1.1, in a preprint as having
"the most immune evasion and infectivity of any of the variants"
derived from previous powerful iterations of the JN.1 strain, which was prominent last winter.

Although the Centers for Disease Control and Prevention says that severe outcomes and deaths from COVID have decreased substantially overall from the pandemic's early days,

wastewater data shows viral activity is high nationally
and COVID-19 infections are growing or likely growing in 35 states.

More COVID infections mean more cases of long COVID.

And long COVID is already exacting an enormous toll on both the people and economies of the world.

Those are words you aren't hearing from many government bodies.
But, the researchers say, the evidence tells the story.

"Despite the dire impact of long COVID on individuals and society,
I fear that many are still unaware of the danger," says Akiko Iwasaki, professor of immunology at Yale School of Medicine and co-lead investigator of the university's COVID-19 Recovery Study.

"There is an urgent need to provide proper diagnosis and treatment for people living with long COVID."

The authors lay out a number of preventive policy measures, including
increased use of masking,
improved ventilation systems
and a vaccination program that pairs COVID shots with seasonal flu shots to extend their reach and improve uptake.

But there's a genuine question as to whether enough folks are paying close attention for any of this to matter.

As the authors point out,
a survey found that as of last August,
one-third of American adults still hadn't even heard of long COVID.

In reality, long COVID was identified and defined in the first full year of the pandemic, 2020,
and it has been increasing its case count ever since.

This form of COVID is particularly perilous because, for many people, its symptoms may last years
(or a lifetime)
and their effects may trigger all sorts of associated problems and costs.

Long COVID affects nearly every organ system, the review notes,
including the cardiovascular, immune, gastrointestinal and reproductive systems.

While more than 200 symptoms have been identified, common symptoms include
memory problems,
difficulty concentrating,
fatigue,
heart palpitations,
chronic cough,
shortness of breath
and recurring headaches.

Long COVID is a $1 trillion problem with no cure.
Experts plead for governments to wake up

For months, governmental officials around the world have appeared to want to forgo discussing the specter of long COVID.

As a new review makes clear, that is wishful thinking
and the latest COVID variants may well kick long COVID into overdrive,
a scenario that researchers and experts have been warning about for some time.

"I think they (government agencies) are itching to declare that COVID is over and that long COVID does not exist," says Ziyad Al-Aly,
director of the Clinical Epidemiology Center at the Veterans Affairs St. Louis Health Care System and lead author of the review.

"It is much more pleasant to pretend as if emergency department visits and hospitalizations haven't been rising sharply this summer."

Al-Aly and several other top researchers lay out a difficult truth:
Long COVID has already affected an estimated 400 million people worldwide,
a number the authors say is likely conservative,
at an economic cost of about $1 trillion annually,
equivalent to 1% of the global economy.

Moreover, the risk of a person being hit with long COVID rises with repeated infections of the virus itself,
and recent COVID activity has experts watching closely.

As review co-author Topol noted in a recent blog post,
the current COVID incursion is ramping up quickly,
with one modeler estimating
900,000 new infections per day in the U.S. alone.

"The new significant wave," Topol said via X.

"It's hard to believe that we are well into our fifth year of this virus and not taking the requisite steps to gain control."

Well, there are people who steer clear of Facebook alternatives because of the bad associations with Facebook, from Cambridge Analytica all the way to the active support of the genocide against the Rohingya.
But it is understandable: Once you've zeroed in on Twitter and Twitter clones that are as faithful to the original as possible, you no longer cope easily with things that work completely differently, and you first have to laboriously rethink everything.
First of all: Friendica is not a clone of 2024 Facebook. Nor of 2022 Facebook. Friendica was developed back in 2010, and not as a clone, but as an alternative. And not as an equivalent alternative either, but as a better one.
A few things about Friendica that are typical of Facebook and untypical of Twitter/Mastodon:

In parts, though, Friendica is even closer to blogs than to Facebook:

So anyone coming from Twitter and then from Mastodon will have to adjust more going from Mastodon to Friendica than from Twitter to Mastodon. But you won't get anywhere with "Why can't they do it like Mastodon, seeing as it's in the Fediverse?"
A brand-new English-language reading tip from , who until now knew practically only Mastodon, then tried Friendica and is so blown away that she doesn't want to leave any more: .
Also worth reading is by from late October 2022, when Twitter had just been taken over by Elon Musk.
#Long #LongPost #CWLong #CWLongPost #LangerPost #CWLangerPost #FediMeta #FediverseMeta #CWFediMeta #CWFediverseMeta #Friendica
Given that, I'm not sure how long a screen reader takes to read a 40,000-character alt-text per image

Don't make the old mistake of treating "alt-text" and "image description" as 100% synonymous. Alt-text can be more than an image description, and an image description does not always go into the alt-text.
The 40,000 characters don't go into the alt-text. If they did, Mastodon would chop 38,500 characters off and throw them away, as would Misskey and all forks of both.
The 40,000 characters go into the post text body. The toot, if you will. Where you have a limit of 500 characters, I have none at all. On Hubzilla, which supports in-line embedding of images, they go directly below the images.
Into the alt-text goes a second, different, separate, independent, much shorter image description.
Each one of my images usually has two descriptions. One full, long, detailed description with all explanations and transcripts in the post text body. One significantly shorter description in the alt-text.
The long description in the post text body is there to deliver all necessary information without Mastodon, Glitch, Ecko, Hometown, Misskey, Firefish, Iceshrimp, Sharkey, Catodon etc. etc. putting the axe to it at the 1,500-character mark.
The short description in the alt-text is there to satisfy those Mastodon users who absolutely demand there be an actually sufficient image description in the alt-text, no matter how. Even if there already is a much longer image description in the post.
After all, they can't see the image description in the post. The post is hidden behind a content warning because it is longer than 500 characters. I always put all posts that exceed 500 characters behind a Mastodon-style content warning that starts with a StatusNet/Friendica/Hubzilla-style summary of the post, followed at least by "CW: long post (<number of characters in the post, raw count> characters)".
Sometimes I take the extra effort upon myself to specify how many of these characters are actual post and how many are image description, which makes it clear right away that there is an image description behind that CW.
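The summary-plus-count pattern described above could be assembled with a small helper along these lines (a sketch of my own; the function name and the breakdown wording are illustrative, only the "CW: long post (N characters)" pattern comes from the text):

```python
def build_cw_summary(summary: str, post_text: str, image_description: str = "") -> str:
    """Build a Mastodon-style content warning: a short topic summary,
    followed by a long-post warning with a raw character count."""
    total = len(post_text) + len(image_description)
    cw = f"{summary}; CW: long post ({total:,} characters"
    if image_description:
        # optional breakdown: how much of the count is image description
        cw += f", thereof {len(image_description):,} characters of image description"
    return cw + ")"
```

For example, `build_cw_summary("Sunset over the sim", "a" * 600)` produces "Sunset over the sim; CW: long post (600 characters)".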
What I always do is add to the alt-text the extra information that a longer, more detailed image description with explanations and text transcripts can be found in the post.
Depending on how much room the short image description leaves me in the alt-text, I also add where exactly the long image description can be found. And that's two different locations, depending on whether you're on something that supports in-line images (e.g. Pleroma, Akkoma, Friendica, Hubzilla, (streams)) or not (e.g. Mastodon, Misskey and their forks).
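Assuming the 1,500-character alt-text cap mentioned earlier, the decision of how much pointer text still fits could be sketched like this (the limit constant, names and wording are my own illustration, not anyone's actual tooling):

```python
ALT_TEXT_LIMIT = 1500  # Mastodon chops alt-text beyond this point

def compose_alt_text(short_description: str, pointer: str) -> str:
    """Append a pointer to the long in-post description to the alt-text,
    degrading to a terser pointer (or none) if the cap would be exceeded."""
    full = f"{short_description}\n\n{pointer}"
    if len(full) <= ALT_TEXT_LIMIT:
        return full
    # not enough room: keep the description, fall back to a terser pointer
    terse = f"{short_description}\n\nFull description in the post."
    return terse if len(terse) <= ALT_TEXT_LIMIT else short_description
```

The point of the fallback chain is that the short description itself always survives; only the pointer shrinks or disappears.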
The rest of us can just look at the image, so we won't use the alt-text; we read the post that accompanies the image for context.

On Mastodon, you'll only see the image and a content warning at first. The content warning starts with a summary, unusual for Mastodon, but standard on Hubzilla. Then it warns you about the post being long, including how long the post is. That'll most likely be a five-digit number.
When you open the content warning, you'll see a few hundred or at most a very few thousand characters of actual post text. Then, after two blank lines, comes the long image description, still in the post. And that's usually several tens of thousands of characters. If the post is recent enough, the image description is announced by an HTML headline, so even those who don't have images in the post know where the actual post text ends and the long image description begins.
If you want to know from personal experience what an image post of mine looks like on Mastodon, search for the hashtag #UniversalCampus. The second or third post from the top is a good example. It has a long image description in the post text body of not 40,000, but over 60,000 characters, my longest to date. The alt-text is precisely 1,500 characters long, 1,402 of which are image description.
By the way, is what the self-same post looks like on Hubzilla.
An example with multiple images can be found under the hashtag #OSFest2023. You have to scroll down until you find a post whose content warning starts with the summary, "Classic creations by Arcadia Asylum a.k.a. Lora Lemon/Aley at OpenSimFest 2023". This one is hopelessly outdated now, by the way.
is the Hubzilla original.
#Long #LongPost #CWLong #CWLongPost #AltText #AltTextMeta #CWAltTextMeta #ImageDescription #ImageDescriptions #ImageDescriptionMeta #CWImageDescriptionMeta
Too dumb if you can't even operate a search engine properly.

Why would they google for something they don't even suspect might exist?
And as long as you don't explicitly tell them that there are things other than Mastodon that are also in the Fediverse and also connect to Mastodon, that completely exceeds their imagination.
Would you, of your own accord, come up with the idea of searching for decentralised, federated virtual 3-D worlds based on free, open-source software if I didn't tell you that such a thing exists? At any rate, for those who have never heard of it, the Fediverse is even more far-fetched and even more unimaginable than that. It's the four-dimensional hypercube of the Internet.
#Long #LongPost #CWLong #CWLongPost #LangerPost #CWLangerPost #FediMeta #FediverseMeta #CWFediMeta #CWFediverseMeta #Fediverse #Mastodon #NichtNurMastodon
Especially the universities, I don't understand how anyone can flagellate themselves with Mastodon like that. They shackle themselves hand and foot with such a limited service that it makes you want to run around screaming.

They don't know that there is anything other than Mastodon.
Because nobody tells them that there is anything other than Mastodon.
Because all the Mastodonians who want to bring the universities into the Fediverse don't tell them anything about it.
Because every other one of them doesn't know it themselves, and the other half have other reasons against it.
And the universities have no other sources. Apparently, stupidly, not even the technical universities.
#Long #LongPost #CWLong #CWLongPost #LangerPost #CWLangerPost #FediMeta #FediverseMeta #CWFediMeta #CWFediverseMeta #Fediverse #Mastodon #Unis4Mastodon #UnisInsFediverse #NichtNurMastodon
I notice it in normal conversations: only Mastodon is regarded as the real thing there.

Someone once told me to my face that Mastodon is the absolute only Fediverse project that is "feature-complete", and that that is a fact.
I would have loved to slam a whole list of Hubzilla features in front of him, prove some of them on top of that, and ask him whether Mastodon has those, too.
But in the course of their Fediverse onboarding, people generally first "learn" that only Mastodon exists, then that the Fediverse is only Mastodon, and then that Eugen Rochko invented the Fediverse and ActivityPub, that Mastodon was there first, that Mastodon is the Fediverse gold standard, and that everything that isn't Mastodon is nothing more than an alternative UI for Mastodon.
But also in communication with the call for universities to join "Mastodon", and then the refusal to use "Fediverse" for it.

Let me guess.
"'Mastodon' and 'Fediverse' are exactly the same thing. They both mean the same. There's no difference."
"People and the universities are far more likely to know Mastodon than the Fediverse. There's no point approaching them with the Fediverse; they won't understand it. Mastodon they're more likely to know and understand."
"It mustn't be too complicated for the universities. It's easier to invite them straight to Mastodon than to first explain the whole Fediverse to them." As if every university were populated solely by tech-averse dullards.
"The universities only need a replacement for Twitter anyway." And Akkoma, Firefish, Iceshrimp etc. aren't Twitter replacements? Oh, right, you've never heard of them. And the universities will certainly get by with at most 500 characters? And with the total refusal of Mastodon's high command to support HTML rendering because that supposedly wouldn't be microblogging?
Besides, I believe that once a technical university has set up its own Hubzilla hub, they'll be absolutely thrilled by the possibilities they have there from a single source and in a single place. Mario might even get PRs from professors or students to improve it.
"Mastodon first, then we'll see." No, then we won't see. Most private users who land on Mastodon hardly ever get away from it within the Fediverse because there's no one-click migration to wherever-else for dummies inside their favourite Mastodon app. And institutions like universities with their own instances are far less flexible still. Even if they wanted to move elsewhere, it would be a process of months or years before it was all done.
How they then intend to reconcile Mastodon with Flipboard, WordPress and later Threads is beyond me.

They don't.
The mere concept, the mere idea that anything else that isn't Mastodon could be connected to Mastodon will be completely foreign to them. The universities are being onboarded just like the many private Twitter refugees: there is only Mastodon, and Mastodon is decentralised on the inside but its own walled garden towards the outside. Exactly the image that Rochko and the gGmbH constantly try to convey.
I think you must have experienced often enough yourself how Mastodon users crapped bricks when, through you, they unexpectedly learned the completely unimaginable the hard way: the Fediverse is not only Mastodon. Mastodon is connected to other things, too. And yes, that's normal and intended.
Only, individual private people can react to this realisation within seconds. At a university, the realisation itself first has to pass through various people and assorted departments and committees and so on until everyone who has anything to do with it actually knows. That alone can take weeks, if not months. A reaction to it can take even longer.
And when someone from outside Mastodon then tries to interact with the university instance, and it isn't exactly the same as when someone else on Mastodon does it, the people in charge stare at it like deer into high beams and know neither what is happening nor what they should do.
It will be hard enough just to keep university instances up to date. Maybe not at a technical university with necessarily capable IT, because it would be embarrassing if their own first-year computer science students had more of a clue than the in-house IT. But everywhere else, there's the install-once-and-never-update risk.
#Long #LongPost #CWLong #CWLongPost #LangerPost #CWLangerPost #FediMeta #FediverseMeta #CWFediMeta #CWFediverseMeta #Fediverse #Mastodon #Unis4Mastodon #UnisInsFediverse #NichtNurMastodon
Well... I don't think anyone's necessarily expecting perfection. They just expect a reasonable and deliberate effort.

They may not expect perfection all over. But they expect full accessibility for themselves.
There are some guidelines out there from WGBH and other sources that can help decide what is and isn't necessary to have a well-described image.

Trust me, I know these guidelines. I follow them to a tee as far as that's possible for me. They have influenced me greatly and still do.
Just recently, I've learned two new things from such a guide. One, dimensions must always be given as relative to a size that people, especially blind or visually-impaired people, are familiar with, e.g. human body height. Two, colours must always be described based on a specific selection of basic colours plus brightness/darkness plus saturation. Blind people have no concept of "cerise" or "crimson" or "mocha". These two points rendered all my previous image descriptions obsolete.
But my images are extreme edge-cases that put these guidelines and the general image accessibility rules to the test.
Let me give you a few examples.
First of all, in order to keep an image description, especially in alt-text, short and concise, the guides tell me to focus on what's important in the image in the context of the post, describe that and sweep everything else under the rug because it's unimportant.
This advice completely failed because there are several dozen singular elements in the image that are important within the context of the post. I did describe everything in the images, but only a tiny fraction of the 40,000-character description of the first image is about the surroundings.
It failed even more and especially . In these cases, I wanted to show the whole scenery with everything in it. Nothing was more or less important in the images within the context of the post than anything else. Thus, I had to describe everything because, in a sense, everything was important.
It also failed . Again, I wanted to show the whole scenery. But even if I had wanted to show only the central element of the image, the building, I would not have been able to shrink the image description to a sensible size. The building is so complex and so strange and unusual at the same time that a highly detailed and very long description of it was absolutely necessary and inevitable. It's for this reason why I normally try to avoid having buildings in my images.
Another piece of advice is to mind the target audience and write for them. This advice always comes in guidelines for alt-text on static Web sites or blogs. It never comes in guidelines for alt-text in social media.
Static Web sites or blogs have very defined target audiences, namely those who actively seek out the kind of content that is on these sites and comes from these blogs. Especially if they're about science, technology or the like, the Web designer or blogger can always count on the target audience being interested in the field and, thus, coming with substantial prior knowledge. Certain things simply don't have to be explained to this target audience because they already know them.
The target audience on social media, on the other hand, can be anyone. In the Fediverse, my target audience includes people who happen upon my posts on the federated timelines of their instances.
People who have one of my image posts boosted/reposted/renoted/repeated to them for whatever reason.
People who discover one of my image posts on their local timeline because someone on their instance has boosted it.
People who come across one of my image posts among the posts of someone whose profile they're checking out, but that user has only just boosted one of my image posts. Granted, I don't think anyone has ever boosted any one of my image posts because they're too intimidating for Mastodon, especially not , but it could always happen.
And, of course, people who happen to follow one of the hashtags I use, and that hashtag is not one about the topic (e.g. #OpenSim), but it happens to be one that I use as a filter trigger (e.g. #LongPost). Or they're interested in all things alt-text, so they follow #AltText, but I put that hashtag on my image posts as well.
Now, my image posts are about a general topic with which maybe one in over 200,000 Fedizens is familiar. The chances that someone who happens upon one of my image posts is familiar with the general topic are practically nil.
Thus, taking my target audience into account, I also have to take into account that at least parts of my target audience know nothing about what my image posts are about. Absolutely zilch. Not even the basics.
For starters, this means I have to explain everything from the ground up. Look at the image posts I've linked to again. Open the content warnings again. Check the beginnings of the full image descriptions. Notice how much I have to ramble in order to only explain where the image was taken.
In real-life pictures, you'll probably never need more than 70 characters to name even the most obscure real-life location. In fact, you'll often get away with name-dropping because most people are familiar with the place or even a specific landmark anyway.
In my virtual world pictures, I can't get away with name-dropping. I have to go all the way and explain on which sim a landmark is, in which grid the sim is, that the grid is a virtual world based on OpenSim, what OpenSim is, what Second Life is, what grids are, what this particular grid is, why they're named grids, what regions are, what sims are, even what the Hypergrid is in order to make clear how my avatar got to that particular location in the first place. None of this can be expected to be known by everyone in the target audience.
Mentioning the place where I've created a picture often requires more characters than fit into Mastodon's alt text.
Likewise, most people should at least have a rough idea what the Capitol is and what it looks like. Or the Eiffel Tower. Or the Taj Mahal. Or a real-life cat, apart from the ways that cats look different from one another.
How many people, do you think, know right off the bat what the Sendalonde Community Library is and what it looks like? Would you know without Googling?
Or how many people, do you think, know right off the bat what an OpenSimWorld beacon is and what it looks like? You think I could get away with name-dropping "OpenSimWorld beacon" in an image description? It doesn't help that there are at least five standard variants from various versions, and some sim users customise them or build their own.
Or if I just simply wrote that there's a teleporter somewhere, how many people, do you think, would be sufficiently informed? How many non-sighted people would be sufficiently informed?
I could post a picture that includes a virtual version of the Eiffel Tower or the Golden Gate Bridge. Really, I could, I know where to find them. Blind or visually-impaired people may have a rough idea what they look like in the real world. People with bad mobile Internet for whom my image doesn't load may know exactly what these structures look like in the real world. But how are they supposed to know what the virtual models of these structures look like? And how is anyone supposed to know where they are located?
So, yes, I do take into account that parts of my target audience may not be fully sighted. That's why I'm doing all that in the first place. That's why I describe stuff that many people may deem unimportant.
See, I'm posting about 3-D virtual worlds. For many people, that's completely uninteresting. And I guess quite a few actually filter the word "metaverse" out for whatever reason.
But there may be people who have actually put high hopes into virtual worlds without knowing that virtual worlds already existed at that point. Then they heard about "the metaverse" dwindling away. And they were disappointed.
And then they come across one of my image posts. About a 3-D virtual world. And they're immediately on the edges of their seats in utter excitement. There are actually existing virtual worlds? Like, right now? That picture in that post is not an AI rendering, but it was made in an actual, real, living 3-D virtual world!
If they read on, they'll discover on top of all that that the virtual world which my image shows is free and open-source. And decentralised. You know, like Mastodon. To the point where anyone can run their own instance. Or rather, grid. And connect it to all the other grids.
This is not what they've dreamed of. This exceeds what they've dreamed of. They would never even have dared to dream of something like this because it was so very unimaginable. And now it turns out it's very real. Oh, and it has been for 17 years already.
Now they're curious like you wouldn't believe. They want to know everything about these worlds. They want to soak up any information about them they can get like a sponge. They want to explore these new and unknown worlds.
If they're sighted, they start by letting their eyes wander around the image. No matter what the image focuses on within the context of the post, they check out everything, also because all that stuff looks much, much more realistic and much, much less cartoonish than those official previews from Meta's Horizons which all of a sudden look even more ridiculous and pathetic in comparison.
I could post a virtual cat picture. No, really, I could. They wouldn't just look at the cat and go, "Aww, how cute." They would check out the surroundings and the background just as well, even though the surroundings and the background don't really matter. But to them, they do matter because it's such an exciting new world.
Now, if these people were blind or visually-impaired, one might argue that they wouldn't be nearly as interested in such a highly visual medium as a 3-D virtual world. I wouldn't count on that. Blind or visually-impaired people might be just as curious about these new worlds as sighted people and just as eager to go explore these new worlds. They might be just as eager to know what everything in any of my pictures looks like, regardless of context. But they need my help.
In general, I don't buy the notion that blind or visually-impaired people can easily be satisfied by mentioning stuff that's in an image, that you can get away with mentioning and name-dropping and nothing more.
I always expect blind or visually-impaired people to be all like, "Yeah, that's fine, but what does it look like?"
The more curious they are about something, the more eager they are to know what it looks like. And the newer and/or more obscure something is to them, the less likely they are to already know what it looks like. And the virtual worlds I write about are completely unknown to them at this point.
All this ties in with the next point. And that's to avoid technical terminology and jargon. If you can't, you have to explain it in such a way that even the most casual audience can fully grasp it with no problems and without having to look up anything themselves. Anything less is ableist.
I can't fully avoid it. If I tried, I would have to constantly write around it in such a way that my image descriptions would get even more confusing.
So I have to explain it. All of it. Down to the very basics. Down to something that I can expect everyone in the Fediverse to know and be familiar with.
Seeing as how very niche and obscure and special-interest my virtual worlds topic is, and how many unfamiliar elements will inevitably appear in both my images and their descriptions, I have a whole lot to explain. Even explaining Second Life would be easier than explaining OpenSim because explaining OpenSim requires a previous explanation of Second Life.
If anyone believes that I can cover all that in significantly under 500 characters in a way that absolutely everyone can understand right away, with no special prior knowledge, without Googling and without asking me questions afterwards, I have a bridge to sell them.
Then there is how people should be described. I only post photographs of real-life people if they're part of a known, established meme template.
Otherwise, virtual avatars take their place. I have started to avoid posting avatars after I had learned about the eye contact trigger. The issue here is two-fold: Mastodon doesn't hide images behind content warnings. And Hubzilla can't make Mastodon blank out sensitive images. So any sensitive image which I post on Hubzilla will always end up immediately in plain sight for everyone, at least for those who don't have filters that catch these image posts.
When I couldn't really avoid having an avatar in an image, I showed the avatar from behind so that the face was completely invisible. It still substantially increased the describing effort and the length of the description, but it was mostly safe from triggering people, and I didn't have to go through the effort of describing the avatar's face and facial expressions.
Now I'm also on (streams), which can make Mastodon blank out sensitive images using one of two specific hashtags. So there's nothing stopping me from posting pictures with avatars in them and even showing their faces.
Describing avatars can end up tedious, though, much more tedious than describing comparable real-life humans, applying the same description rules.
First of all, there's no skimping. There's no "it doesn't matter what the avatar looks like". It always matters to someone what the avatar looks like. Again, exciting new world to explore and all. See above. And if the image is a portrait of the avatar, it matters very much what the avatar looks like.
Next, there's much less that doesn't have to be explained because it's a given. For example, there's quite a bit of debate on whether or not to mention a real-life human's skin tone, and if so, how. The general consensus is usually to limit it to "light, medium light, medium, medium dark, dark". Not much more is necessary, except maybe freckles or something. But a real-life human skin always looks like a real-life human skin. That's a given, isn't it?
In a virtual world, it isn't a given. In case you were unaware: The skin of a human avatar in a virtual world can be a plain, solid tint on a 3-D mesh. It can be photo-realistic. It can be one or multiple actual photographs, e.g. because the whole avatar was generated from photographs and a 3-D laser scan of a real-life human. And it can be anything in-between.
Look up early promo pictures showing Mark Zuckerberg's personal Horizons avatar. Then search Flickr for Second Life pictures. There are worlds between the two. You may be tempted to think that the Second Life pictures were generated by Stable Diffusion rather than rendered in a virtual world.
So if I just mentioned the skin tone, how would you, without looking at the image, be able to know what exactly the skin looks like, whether it's a cartoonish solid colour or a set of photo-realistic textures?
This goes on and on.
In fact, when I describe an avatar, I also have to explain the avatar. By that, I mean I have to explain the avatar system in Second Life and OpenSim and how it works. Unlike in many other virtual worlds, avatars there aren't single-piece "monoblock avatars" à la Ready Player Me that can only be varied in overall size, if at all. They are highly customisable in-world with no external tools, and they are highly modular. This explains why avatars look like what they look like. And seriously, only experienced Second Life and/or OpenSim users even expect this level of in-world customisability and modularity to be possible in the first place.
If I were to go all the way, I'd even explain what mesh is, what rigged and fitted mesh is, how it works, how it differs from the "classic" means of customising Second Life and OpenSim avatars, what Bakes-on-Mesh is and what advantages Bakes-on-Mesh has over what was used before it was there.
In , I omitted the latter, and I also forgot to mention that the skin textures are near-photo-realistic, and the skirt texture is photo-realistic. Maybe I'll edit the image description preamble in the post and add both.
But what increased the effort of describing the avatar are the OSgrid logo and the logo for OSgrid's 17th birthday. Nobody who isn't an active OpenSim user knows what either looks like without seeing it. So I had to describe both in detail, also because the latter includes three independent lines of text which I had to transcribe.
This brings me to the last point: text in images and its transcription.
There is the rule that says that any and all text within the borders of an image must be transcribed 100% verbatim.
In the Fediverse, this rule is usually applied to screenshots, Mastodon screenshots, pictures of newspaper clippings or photographs of posters. This is usually manageable except when the text in a newspaper clipping is over 1,500 characters long.
The various alt-text guides out there that mention text transcripts are never about social media, what with how the puny character limits for alt-text on commercial social media make transcribing text difficult to nigh-impossible.
They're always about static Web sites and blogs. This mostly means commercial Web sites, professional scientific Web sites, professional scientific news blogs or professional Web sites or blogs about technology.
So some of the guides do cover the occasional edge-case. A graph or a flowchart is a near-edge-case due to its complexity. There are specific defined rules on how to describe graphs, although they may differ. One solution is to fully describe a flowchart in the main text. As for graphs, it's often considered okay to sweep the exact details under the rug unless every last figure in the graph absolutely matters. A complex table is very much an edge-case because a full transcript of it in alt-text would be too long. It is not as clear how exactly a table has to be handled, but it is being discussed.
Most importantly, both of these edge-cases are very well-known. They happen to professional Web designers and bloggers over and over and over again, and they have been happening for many, many years.
My images are edge-cases, too. But they're completely unknown edge-cases, and they're edge-cases in ways that are completely unknown to all those who write alt-text guides and who work on standardising accessibility.
I often have lots of bits of text in them in various places and ways. Signs, posters, box labels, vendor labels, teleporter labels, flags, neon signs, prints on clothing, the writing on an OpenSimWorld beacon, what-have-you. All this could be in my images. And a lot of it has already been in my images, sometimes in great numbers.
This alone is an edge-case. An edge-case which none of the alt-text guides handle because none of them are even aware of its very existence. The same situation could occur in real-life photographs, but real-life photographs practically always focus on one specific element or a few, and everything else is recommended to be treated as non-existent in alt-text. So part of my edge-case is that this focus on one specific text-less element is simply not there in most of my images.
I can solve it gracefully by actually transcribing absolutely everything verbatim, 1:1, 100%. The transcripts usually don't go into the alt-text anyway, unless there aren't too many of them. They definitely always go into the long, full, detailed description in the post, in case you've forgotten that I usually describe each one of my images twice. Only if there is a maximum of three or four short bits of text, and I don't have to describe too much in the alt-text otherwise, do the transcripts go into the alt-text as well. But I never put some transcripts into the alt-text and leave others out. Either all of them or none. Regardless, all of them always go into the full description in the post.
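This all-or-none placement rule can be sketched as a small decision function. To be clear, this is purely an illustration, not any actual Fediverse software; the function name, the four-snippet threshold and the 80-character "short" cut-off are assumptions drawn from the description above:

```python
def place_transcripts(transcripts, alt_text_base):
    """Sketch of the all-or-none rule: transcripts always go into the
    full description in the post; they only ALSO go into the alt-text
    if there are at most four short snippets and they still fit
    alongside the rest of the alt-text."""
    ALT_TEXT_LIMIT = 1500  # Mastodon's alt-text cap, for reference

    full_description_parts = list(transcripts)  # always, all of them

    alt_text_parts = []
    few_and_short = len(transcripts) <= 4 and all(len(t) < 80 for t in transcripts)
    total_length = len(alt_text_base) + sum(len(t) for t in transcripts)
    if few_and_short and total_length <= ALT_TEXT_LIMIT:
        alt_text_parts = list(transcripts)  # all of them...
    # ...or none: never a partial subset in the alt-text

    return full_description_parts, alt_text_parts
```

The point of the sketch is the branch structure: the alt-text either receives every transcript or no transcript at all, while the full description is unconditional.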
What makes my unhandled edge-case even more of an unhandled edge-case is that text often tends to be illegible in my images.
It may be too small to be readable at the resolution at which I post the image, e.g. two or three pixels high. It may be so small that it can't even be identified as text at that resolution. It may be so tiny that it's practically invisible, e.g. less than half a pixel high. But still, it's text, and technically, it's within the borders of the image. Or it could be partially obscured, e.g. a poster with a lot of text on it, but with 30% of the poster's front area covered by the trunk of a tree in front of it.
No alt-text or image description guide that deals with text transcripts ever deals with unreadable text. They don't declare that it must not be transcribed. They don't declare that it needn't be transcribed. They don't declare explicitly that it must be transcribed.
The very existence of illegible text in an image completely eludes absolutely everyone who writes guidelines for image descriptions.
Given the lack of special treatment requirements for illegible text, I have to suppose that illegible text has to be handled the exact same way as legible text. It has to be transcribed verbatim. And so I transcribe it verbatim.
Now you may say that that shouldn't even be possible. If that text is illegible, I can't read it either, so how should I be able to transcribe it?
But I can read it.
See, when I describe an image, I don't just simply look at the image at the resolution at which I intend to post it. I look at the real deal in-world. I don't look at the representation of a sign in my image. I look at the sign proper, in the virtual world, right where the sign is installed.
I can walk around with my avatar. I can move the camera independently from the avatar. I can move around obstacles such as tree trunks. I can close up on everything.
And all of a sudden, that tiny white blot of 4x3 pixels on which the writing can't even be made out as such unfolds before my eyes as a sign with a front texture of 1024x768 pixels with half a dozen clearly readable lines of text.
And all of a sudden, I can read that text behind that tree.
In fact, I'd say that even fully sighted people can profit from how I transcribe everything, legible or not. If even the most eagle-eyed Fediverse users can't read something in one of my images, they still know what's written there because I can read it, and I tell them what's written there.
Lastly, all this is just me following existing guidelines and rules, from explaining jargon and unknown words to describing people to transcribing text to taking my audience into account and giving them what they require. It's just that my images are extremely obscure edge-cases for which no officially defined and agreed-upon exceptions exist and probably never will.
#Long #LongPost #CWLong #CWLongPost #OpenSim #OpenSimulator #Metaverse #VirtualWorlds #AltText #AltTextMeta #CWAltTextMeta #ImageDescription #ImageDescriptions #ImageDescriptionMeta #CWImageDescriptionMeta #Transcript #Transcripts #Transcription #Inclusion #Inclusivity #A11y #Accessibility #AccessibilityMatters


The more complex and/or obscure an image is, and the more description and explanation it requires for casuals to understand it, the harder it is to describe and explain it in a way that makes the image accessible to absolutely everyone. But exactly this should always be the goal: make your images accessible to absolutely everyone with no exception.
At some point, you will inevitably have to throw someone in front of the bus to cater to someone else.
It's impossible to describe the kind of images I post in a way that's short enough for absolutely all blind or visually-impaired users to comfortably have the sole remaining image description read to them by their screen readers and informative and explanatory enough for casuals and especially neurodiverse people to fully understand it. The former will require one image description in the alt-text that's no longer than 200 characters plus no additional image description in the post. The latter will require everything, absolutely everything to be explained and described from the absolute basics up to details like single elements in the image which means tens of thousands of characters, which in turn will have to go into the post. I guess you can see the contradiction.
If the image that you want to post shows something sufficiently niche and obscure, this goal becomes unattainable, because at least one group of disabled people will inevitably have to suffer from at least one image description that's completely useless and totally inconvenient to them.
#Long #LongPost #CWLong #CWLongPost #AltText #AltTextMeta #CWAltTextMeta #ImageDescription #ImageDescriptions #ImageDescriptionMeta #CWImageDescriptionMeta #Inclusion #Inclusivity #A11y #Accessibility #AccessibilityMatters

Danie van der Merwe
I get who your target audience may be, but the alt-text is really intended for the sight-impaired audience, not for visually fine readers.

Yes, but I didn't want to taint my track record of having described all my images since summer 2023 by not describing these two, regardless of whether or not descriptions were actually necessary for my specific target audience. It's bad enough that I didn't have the time to apply my own high standards for image descriptions to these two images.
That means any sight-impaired user who follows you or follows a hashtag you choose to use will indeed be seeing and trying to read your posts.

My posts, yes, unless I limit who receives and therefore is permitted to read them.
Comments on Hubzilla should be expected to work differently from replies on Mastodon. They should only go to the author of the post at the beginning of the thread and be spread to all those who have a concept of conversations, and who have the post in their timelines/streams. In addition, they should go to those whom they mention and have them notified.
Curiously, at least when the post is fully public, my comments seem to go out to almost all my connections. The only exceptions are the Threadiverse unless I comment on something in the Threadiverse, those who filter out something in the comments and those on at least Hubzilla who haven't received the post and all comments in the branch before my comment, if any.
Unfortunately, on Hubzilla, whoever sends the post has the exclusive right to define the audience of the entire thread. I can't restrict the audience of a comment to a post to fewer actors than the audience of the post itself.
So in my case, I try to keep my alt-text descriptions shorter and more concise, specifically for screen readers.

For my own images, I always feel that image descriptions in not significantly more than 200 characters wouldn't even start to actually describe anything. They'd stop at what the image is about. But they would barely, if at all, get to mentioning at least the most important elements that are in the image. And they certainly wouldn't get to describing what any of them look like. And I'm still not talking about explanations, which shouldn't go into alt-text anyway, and text transcripts.
I feel that even with the additional, full, long, detailed image description in the post as a "backup", such a short description in the alt-text would be so insufficient that it'd be half-useless and border on being ableist.
If you want to know what my image posts usually look like: with exactly 1,500 characters of alt-text, a bit over 1,400 of which are image description, plus over 60,000 characters of image description plus explanations plus text transcripts in the post. Unfortunately, my standards have advanced since then, and it is outdated now. But it's still probably the image post with the longest image description in the whole Fediverse.
I could also link to my most recent image post. It's more up-to-date, i.e. it avoids two mistakes I've made in the post that I've linked to. But it's pretty non-standard: It has two images which I mostly describe together in the post in what's essentially one very long common preamble of almost 11,000 characters, followed by individual descriptions for the images of a bit over 2,800 and a bit under 6,600 characters. This is because I was done describing the first image when I decided to add a second image which has a lot in common with the first one. The alt-texts have 1,500 and 1,499 characters respectively, but they use more characters to explain where the long descriptions can be found.
#Long #LongPost #CWLong #CWLongPost #FediMeta #FediverseMeta #CWFediMeta #CWFediverseMeta #Mastodon #Hubzilla #Streams #(streams) #AltText #AltTextMeta #CWAltTextMeta #ImageDescription #ImageDescriptions #ImageDescriptionMeta #CWImageDescriptionMeta #Inclusion #Inclusivity #A11y #Accessibility #AccessibilityMatters

I thought the Mastodon users were already fed up with the "reply guys" from Friendica who barge uninvited into their threads and inform them that "Mastodon" and "Fediverse" are, by the way, not the same thing, that there are entirely different things in the Fediverse as well, and so on and so forth.
But there's no other way to explain it to them. I estimate that over 95% of Mastodon communication happens within Mastodon itself. If you aren't on Mastodon, you have hardly any reach on Mastodon.
At least we're currently lucky that the Fediverse is barely growing, or even shrinking. That means the other projects could finally catch up for once instead of immediately being left behind again by the next wave of refugees, which once more rolls almost exclusively straight to mastodon.social.
I don't notice much of this myself because I don't let posts from most of my Mastodon contacts through, at most comments and DMs. At best, I could play the good old hashtag-search-on-mastodon.social game. I could also subscribe to hashtags as feeds, but those would clutter up my stream again.
CC:
#Long #LongPost #CWLong #CWLongPost #LangerPost #CWLangerPost #FediMeta #FediverseMeta #CWFediMeta #CWFediverseMeta #Fediverse #NichtNurMastodon


I have no other choice.
First of all, each one of the images contains three images in turn, all three of which must be described.
Furthermore, all text in an image must be transcribed verbatim. No exception. Each one of the images contains 18 individual pieces of text. The first image alone has over 600 characters to transcribe:
The second image has the same fields, but more characters because some of the IDs are longer.
And this time, all text is actually readable. This is not another case of countless microscopically tiny pieces of text of which you wouldn't even know where they are in the image unless you paid close attention to the description.
The target audience of my post should not go far beyond:
These five will have my comment automatically delivered to them because they have received the post on their streams, and thus, they will be notified by all other interactions with the post and the comment thread. And I can guarantee you that all five are sighted.
Nobody on Mastodon will be automatically notified about my comment. Mastodon does not fetch replies to posts as unread content. And when I comment on a post, this comment is not delivered to all my followers. This is how Hubzilla works, in stark contrast to Mastodon.
This also means that the alt-texts are exclusively geared towards Hubzilla and (streams). They are too long for Mastodon, Misskey and their respective forks. Mastodon will truncate the first alt-text in the middle of a word ("...a dark and slightly bluish grey nec") because that's the 1,500-character mark.
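The hard cut at the 1,500-character mark amounts to a plain character slice, roughly like this (a sketch of the observed behaviour, not Mastodon's actual code):

```python
def truncate_alt_text(alt_text: str, limit: int = 1500) -> str:
    """Keep only the first `limit` characters, even mid-word,
    which is how a description can end in a fragment like "nec"."""
    return alt_text[:limit]
```

A 1,600-character alt-text would thus lose its last 100 characters, regardless of word boundaries.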
Of course, I can take into account the millions of Mastodon users who might find the post or one of the comments somewhere, then open the whole thread, then click on the start post, then go through actually the whole thread, provided all of it was forwarded to their Mastodon instance, and then discover my comment with two screenshots.
Have you ever encountered a single one of my image posts? Especially in the past six months or so?
Because if I took all these people into consideration as my target audience, I would have to make a similar effort as for my usual pictures, which would result in even more characters of description and four separate image descriptions plus one preamble for two of them.
I would have to describe each image twice with a long, detailed, explanatory description in the post itself and with a shorter, probably purely visual description with no explanations and no text transcripts in the alt-text.
After all, if I included any possible Mastodon user into my target audience, I would have to take into account what they know and what they don't. And they know a whole lot less about these images than the five gentlemen who are my actual target audience right now. One out of four Mastodon users has never heard of Hubzilla, and my estimation is that every other Mastodon user thinks the Fediverse is only Mastodon.
I would start with the descriptions in the post. And I would start these with a common preamble for both descriptions.
I would explain what the site from which I'm posting screenshots is about.
I would explain that, no, the Fediverse is not only Mastodon because I can't assume it to be known.
I would explain what Hubzilla is because I can't assume it to be known.
I would explain what Friendica is because I can't assume it to be known, and because knowing what Friendica is is required in order to understand the explanation of Hubzilla.
I would explain what a software fork is because I can't assume it to be known, and because knowing what a software fork is is required in order to understand both the explanations of Friendica and Hubzilla.
I would explain what the streams repository is because I can't assume it to be known.
I would explain what a software repository is because I can't assume it to be known, and because knowing what a software repository is is required in order to understand the explanation of the streams repository.
In fact, I would probably have to explain what source code is and what certain software licenses are because I can't assume either to be known, but it's necessary to understand software repositories and the streams repository.
I would explain what single sign-on is because I can't assume it to be known.
I would explain what OpenWebAuth is because I can't assume it to be known.
I would explain what Zap was because I can't assume it to be known, and because that's a requirement in order to understand the history of OpenWebAuth.
I would mention early on that both images contain two almost identical variants of my profile picture.
In order for people to understand my profile picture, I would explain what OpenSimulator is because I can't assume it to be known.
In order for people to understand OpenSimulator, I would explain what Second Life is because I can't assume it to be known.
In order for people to further understand the description of my profile picture, which isn't necessary in reality because I don't mention where exactly I have taken it, but which would be necessary then because someone might want to know where exactly the image is from, I have to explain what the Wolf Territories Grid is because I can't assume it to be known.
In order for people to understand the explanation of the Wolf Territories Grid, I would have to explain what grids are and why Second Life's world and the worlds based on OpenSim are referred to as "grids" because I can't assume it to be known.
In order for people to understand that, I would have to explain regions and sims in the Second Life and OpenSim context because I can't assume it to be known.
The preamble alone might be longer than my longest image description so far, and my longest image description so far is over 60,000 characters long.
Then I would describe the first image in the post.
Then I would describe the second image in the post, assuming the description of the first image to already be known.
Then I would write a separate, shorter, purely visual image description for the alt-text of the first image. I would not assume any of the other image descriptions to be known and write it as a stand-alone description. The alt-text would also mention that there is a longer, more detailed, explanatory image description with text transcripts in the post, that the users of Mastodon, Misskey etc. etc. can find it hidden behind the content warning which reads, "<Insert full quote of the summary with all content warnings here>", and that the users of Pleroma, Akkoma, Friendica, Hubzilla and (streams) can find it in the post they are reading right now, right after the second image.
Then I would write a separate, shorter, purely visual image description for the alt-text of the second image. I would not assume any of the other image descriptions to be known and write it as a stand-alone description. The alt-text would also mention that there is a longer, more detailed, explanatory image description with text transcripts in the post, that the users of Mastodon, Misskey etc. etc. can find it hidden behind the content warning which reads, "<Insert full quote of the summary with all content warnings here>", and that the users of Pleroma, Akkoma, Friendica, Hubzilla and (streams) can find it in the post they are reading right now, right after this image.
I wouldn't be done before Monday.
And I still wouldn't be able to satisfy everyone. Some would demand that my alt-texts be no longer than 200 characters each while still being sufficiently descriptive. Others would dislike the total lack of text transcripts in the alt-text, regardless of the fact that the text transcripts plus the necessary image description elements would have inflated the alt-texts to lengths that Mastodon couldn't possibly hold. And then there would be those who are enraged because I have the audacity to put over 500 characters into one post, something that in their eyes should be forbidden Fediverse-wide.
The only choice I have not to throw anyone in front of the bus is by not posting these images at all.
#Long #LongPost #CWLong #CWLongPost #FediMeta #FediverseMeta #CWFediMeta #CWFediverseMeta #Mastodon #Hubzilla #Streams #(streams) #AltText #AltTextMeta #CWAltTextMeta #ImageDescription #ImageDescriptions #ImageDescriptionMeta #CWImageDescriptionMeta #Inclusion #Inclusivity #A11y #Accessibility #AccessibilityMatters

I'm keeping my word. I describe every single last image I publish anywhere. Even when I'm quite certain that the image in question will only be encountered by a small handful of users on Hubzilla and (streams), none of whom needs image descriptions. Just like this time: I'm describing two images in a comment on a Hubzilla post which will not be automatically forwarded to anyone on Mastodon. Mastodon users will only come across that comment if they check the whole thread.
The exception I'm making this time is that I'll only describe each image once. In the alt-text. With no explanations. The target audience doesn't need to have explained to them what Hubzilla is, what (streams) is, what OpenWebAuth is, what single sign-on is, and they probably don't care enough about what OpenSim is to expect an explanation either. In fact, neither Hubzilla nor (streams) has alt-text and image descriptions as part of its culture.
So there will only be the image descriptions in the alt-texts and not a set of full, detailed image descriptions in the post plus one image description shortened to under 1,500 characters in the alt-text of each image.
Still, although the images are screenshots and far from the most complex screenshots I've ever described, the image descriptions will be several thousand characters long each. That's halfway okay because Hubzilla and (streams) have no character limits for alt-text either, apart from how much the Web interface can show at once on any given output device. But even if the Web interface can't show an alt-text to its full extent, a screen reader should still be able to read it all. And sighted Hubzilla and (streams) users can take a look at the source code of my comment.
Part of the length comes from describing my profile picture in each one of the screenshots, but there are also lots of pieces of text to describe and transcribe. To keep it "brief" this time, I won't mention and describe the typefaces, though.
#Long #LongPost #CWLong #CWLongPost #FediMeta #FediverseMeta #CWFediMeta #CWFediverseMeta #Hubzilla #Streams #(streams) #AltText #AltTextMeta #CWAltTextMeta #ImageDescription #ImageDescriptions #ImageDescriptionMeta #CWImageDescriptionMeta #CharacterLimit #CharacterLimits


NorVegan
Just to make sure I understand correctly: embedded inline images are not removed altogether, but rather moved to the bottom of the post ...but only 4 images, the rest is actually removed. Did I get that right?

What actually happens is that Mastodon's "code sanitiser" strips everything out that Eugen Rochko doesn't deem old-school, original-gangsta, Twitter-like microblogging. Except a little text formatting since Mastodon 4. But this includes in-line images which Mastodon removes.
In order for images in posts to make it to Mastodon in some way, Friendica, Hubzilla and (streams) make copies of inline images as file attachments because Mastodon does not strip file attachments away. File attachments are how Mastodon does images.
But even then, Mastodon only supports a maximum of four file attachments, even in incoming posts. Everything that goes beyond that is discarded entirely.
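The two behaviours described in this comment could be sketched like this. To be clear, this is not the actual code of Mastodon, Friendica, Hubzilla or (streams); the function name and the regex are my own illustration of the outcome:

```python
import re

MAX_ATTACHMENTS = 4  # Mastodon's media attachment cap

def sanitise_incoming(html_body, attachments):
    """Rough sketch of the described behaviour: inline images are
    stripped from the post body, and only the first four file
    attachments survive; the rest are discarded entirely."""
    # Strip inline <img> tags from the HTML body
    body = re.sub(r'<img\b[^>]*>', '', html_body)
    # Keep at most four attachments
    kept = attachments[:MAX_ATTACHMENTS]
    return body, kept
```

This is why Friendica, Hubzilla and (streams) duplicate inline images as file attachments: the attachment channel is the only one Mastodon preserves, and even that one only up to four items.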
I've set my channel up in such a way that I have to confirm all new connections.

How do you do this?

It's a default setting in Hubzilla. It can be found in the privacy settings. Or rather, the opposite can, namely that anyone can follow your channel just like so, without your explicit consent.
So if you manually confirm new connections, it'll appear to them like you follow them back, but if they're confirmed automatically, it will not appear that way?

It doesn't just appear so.
If someone tries to follow me, all that happens is that I receive a notification. But they don't actually follow me. They aren't permitted anything, not even to receive my posts.
In order for them to actually follow me, I have to confirm their follow request. But in doing so, I automatically follow them back.
Hubzilla doesn't do that Twitter/Mastodon shtick of having followers on the one hand and followed on the other hand, and these are two different things, and if you follow one of your followers, congrats, you're mutuals.
Normally, Hubzilla works like Facebook. You're connected bidirectionally, mutually, or you aren't connected at all. Only that Facebook calls it "friend", and Hubzilla calls it "contact" or "connection".
Okay, but there's still that switch that I've mentioned above. It allows others to follow you without your consent. I've never tested this switch. None of my Hubzilla channels is experimental enough for me to try it. But thinking about it, I guess it should establish a bidirectional connection, too. Again, I don't know for sure.
This switch does not make much sense for a personal channel. It's much more useful for a public, anyone-can-join forum channel.
Now comes the extra trick that I've pulled, the thing that I mean with "I don't actually follow you back".
Most of my connections are Mastodon users who have absolutely nothing interesting to say. Not within the scope of my channel (primarily OpenSim, secondarily the Fediverse beyond Mastodon).
Some followed me because they took me for a Fediverse guru and nothing else because they had judged me by that one single post or comment they discovered from me. Others, especially from 2023 on, followed me because they found me on the federated timeline on mastodon.social, and they followed everyone whom they found there so they had their timeline abuzz with uninteresting cruft just like on Twitter.
If I actually followed them back like you'd expect me to follow them back, they'd completely and utterly soak my stream with what I consider totally uninteresting cruft, completely drowning out the interesting posts, even more so if I even let their boosts through.
On the one hand, I want to let them follow me. On the other hand, I don't want them to drown me in all that off-topic rubbish.
I tried filtering when the channel was new. All of a sudden, my stream was nicely quiet. The problem, however: Hubzilla can't apply a per-connection whitelist keyword filter to posts only; it applies to comments and DMs as well. At least once, I didn't receive a DM because the filter blocked it: it didn't contain any of the keywords required for content to come through.
So what I did was:
  1. I edited all my contact roles and explicitly allowed sending me posts in them.
  2. I made two new contact roles that don't allow sending me posts.
  3. I assigned these new contact roles to all contacts that I didn't expect to post anything interesting.
  4. Lastly, I edited my channel role and turned off the permission to send me posts.

Step 1 ensured that everyone who had one of my old channel roles assigned was still allowed to send me posts even after step 4. Step 2 and 3 made sure all those whom I wanted silenced were silenced when step 4 happened, and those were the majority of my connections.
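Modelled as data, the four steps amount to explicit per-role grants overriding a channel-wide default. The following is only a hypothetical sketch of the permission logic as described, not Hubzilla's actual implementation:

```python
CHANNEL_DEFAULT = {"send_posts": False}          # step 4: channel role denies posts

contact_roles = {
    "old_role":   {"send_posts": True},          # step 1: explicit allow added
    "muted_role": {"send_posts": False},         # steps 2-3: explicit deny assigned
}

def may_send_posts(role_name):
    """A per-role setting, if present, wins over the channel default."""
    role = contact_roles.get(role_name, {})
    return role.get("send_posts", CHANNEL_DEFAULT["send_posts"])
```

With this shape, contacts on the old roles keep delivering posts after the channel-wide permission is switched off, while everyone on the new roles falls silent.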
Again, my stream was not nearly as busy as it would have been if everyone was permitted to send me their posts.
To reduce the cruft even further, I applied boost filters to those whose posts I allow, but who boost a lot of rubbish. And I also try to filter out specific uninteresting things from very busy connections which, at the same time, are unlikely to send me DMs by blacklisting certain keywords.
#Long #LongPost #CWLong #CWLongPost #FediMeta #FediverseMeta #CWFediMeta #CWFediverseMeta #Hubzilla #Filters #Permissions

This is close to what I already do.
In fact, when I have an idea for a picture, I go through the image description in my head, then I look at the scenery, then I go through the image description in my head again with the scenery before my eyes. And when I hit obstacles while trying to describe the scenery in my head, I decide against both creating the image and writing the image post in the first place.
This is why I usually try my best to avoid e.g. having buildings in my images, especially, but not only realistic buildings. Having to describe them is actually worse than having to describe space nebulae, and I have described space nebulae before. I avoid space pictures in my images now as well.
For me, it isn't about having some alt-text for an image because any alt-text is allegedly better than none. It's about having a pair of descriptions for each image that are as informative, accurate and thus useful as I can possibly write them.
#Long #LongPost #AltText #AltTextMeta #CWAltTextMeta #CWLong #CWLongPost #ImageDescription #ImageDescriptions #ImageDescriptionMeta #CWImageDescriptionMeta

On the ferry back to civilization it became rainy and windy, so it became time to get the boat/bus/train/train (change at Babylon) home to Manhattan.

Okay, that means (streams) has had more problems since the introduction of nomadic identity over ActivityPub last month than were previously known. I've reported it and hope that somebody reacts to it.
After all, I neither received a notification, nor are you listed as incomplete among my connections.
Hopefully Mike isn't on holiday right now. And hopefully my admin sees this, too.
#Long #LongPost #CWLong #CWLongPost #LangerPost #CWLangerPost #FediMeta #FediverseMeta #CWFediMeta #CWFediverseMeta #Streams #(streams)

tl;dr: Why you should block me now


First things first, reasons why you should block me rather than following me:

What this is, and what this is about


This is not someone's single, general, all-purpose, personal Mastodon account. This channel (not account) was created to specialise in the topics of virtual worlds in general and, more specifically, those based on OpenSimulator. You can consider me not much more than an OpenSim avatar.
Since most of you have probably never heard of OpenSim, and you're wondering what I'm talking about:
In a nutshell, OpenSimulator is a free, open-source and decentralised server-side re-implementation of #SecondLife, created around Second Life's own viewer API after Linden Lab made the official Second Life viewer open-source. It was launched in January 2007, and most OpenSim-based worlds, usually called grids, have been federated with one another since the introduction of the Hypergrid in 2008. One could say that it is to Second Life what Mastodon is to Twitter, what Pixelfed is to Instagram, and what Lemmy is to Reddit, only that the UI can be almost identical.
I've been in OpenSim since April 30th, 2020. By the way, I'm not in Second Life, and I've never been there.
I occasionally post about the Fediverse with which I mean the Fediverse beyond Mastodon. That's when I have to say something that nobody else says.
Sometimes it's easier to express something in one image macro than in 5,000 words. New meme posts shall go elsewhere in the Fediverse (see below).
I don't post about real life. I may occasionally comment posts about real life, but I don't post about it. This channel is not about real life.

Where I am in the Fediverse


Those of you who come across my channel in their Web browsers in search of my profile (which is , by the way), will most likely see it right away. But those who see this post in their Mastodon timelines won't, although the text formatting should be a dead give-away. So it's only fair to mention it here:
I'm not on Mastodon. Yes, I'm someplace that's connected to Mastodon, but I'm not on Mastodon proper. So some of you might learn it from this post: The Fediverse is not only Mastodon.
Instead, I'm using a project named Hubzilla. It has tons of features that Mastodon doesn't have, including some that are highly requested on Mastodon, such as full-text search, quotes, quote-posts, text formatting, magic single sign-on and nomadic identity. It practically doesn't have any character limits at all.
Also, Hubzilla is older than Mastodon. It had its 1.0 release in December 2015, more than half a year before Mastodon, and it emerged from the renaming of the Red Matrix, a project launched as early as 2012, about four years before Mastodon. For as long as Mastodon has existed, it has continuously been connected to Hubzilla. Oh, and by the way: Mastodon was the second Fediverse project to adopt ActivityPub, in September 2017. The first one, two months earlier, was Hubzilla.

Other channels


My little in-world sister Juno has her own Hubzilla channel. It's even more specialising in OpenSim from her point of view.
Juno Rowland

In addition, I have another channel on an instance of a nameless Fediverse server application that can be found in , a descendant of Hubzilla by Hubzilla's creator. I have launched that channel to be able to post images that may be sensitive in some way, e.g. images that show faces, which means eye contact. Hubzilla can't make Mastodon blank them out; (streams) can. Again, this channel is in the Fediverse, and you can follow it from Mastodon and anywhere else in the Fediverse.
Jupiter Rowland's (streams) outlet

On the same instance, I have a channel that specialises in posting self-made memes about the Fediverse, based on established and more or less well-known meme templates. This should be clear, but I'd like to mention it anyway: These memes don't suppose that the Fediverse is only Mastodon, nor do they treat Mastodon as the centre of the Fediverse.
fedimemesonstreams@streams.elsmussols.net

Lastly, I have a blog about OpenSim in German on WriteFreely that's somewhat dormant currently, but I still have a lot to write and post about. WriteFreely is basically Medium in the Fediverse. Again, if you understand German, you can follow the blog from anywhere in the Fediverse. But you can't reply to my blog posts; WriteFreely doesn't support comments.


What it means that I'm on Hubzilla


Next to my hashtags and mentions looking weird in comparison to what you're used to on Mastodon, the biggest "side-effect" of this is that my posts can grow truly massive by Mastodon standards. Where Mastodon has a hard-coded limit of 500 characters, Hubzilla does not have any real character limit at all. It has never had one, and its predecessor, Friendica, has never had one either. Thus, character limits of any kind are not part of Hubzilla's culture, which is very, very different from Mastodon's culture in many ways.
This means I don't do threads when I have to write a lot. I don't have to. I can put everything I want to post into one single post. Long posts are fortunately still something that Mastodon displays correctly even if you can't write them on most Mastodon instances. As far as I know, it's only above 100,000 characters that Mastodon rejects posts completely. And on Hubzilla, you can even post many times more characters than that.
This post, for example, is longer than 42 Mastodon toots, and as you can see, I didn't break it down into a thread of well over 50 single posts.
That said, if I really have to write something that's akin to a blog post with more than four embedded pictures, I'll do it as a long-form article that doesn't federate and then link to it, even though I could do it as a regular post. I know that some of you mobile-app users don't like your Web browser popping open, but trust me when I say it's the best solution, also due to what Mastodon does with embedded images, which it can't display as such. Besides, I don't force you to tap that link to my newest article.
It's highly unlikely that I'll post anything with that many images, though, because describing each image would be extremely tedious, and the image descriptions would take up horrendous amounts of room in the post. I'll come back to that again further down.

How I handle images


Which takes us to images. It's here where I do acknowledge some of Mastodon's limitations, seeing as well over 90% of the recipients of my posts are on Mastodon, what with how many newbies indiscriminately follow everything they come across to get their personal timeline busy, and others following me with the belief that I'm a Fediverse guru first and foremost.
I no longer post more than four pictures at once in anything that federates into other people's or instances' timelines, streams or whatever. That's because Mastodon can't handle more than four file attachments, and Mastodon removes all embedded inline images from posts.
I still embed the pictures someplace in my posts that is not at the bottom. The bottom is for hashtags which I haven't already used in the text. Yes, I make a lot of use of hashtags for everyone's convenience, and I always write them in CamelCase when appropriate and/or necessary. As for the embedded pictures, sometimes I explain in my posts where which picture that you'll find at the bottom of the post should be where in the text, but I don't always do that.
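CamelCasing a hashtag is simple enough to sketch. This helper is hypothetical, but it shows why a tag like #ImageDescriptionMeta is friendlier to screen readers than one long lowercase run:

```python
import re

def camel_case_tag(phrase):
    """Join a multi-word phrase into a CamelCase hashtag so screen
    readers can pronounce each word separately."""
    words = re.split(r"\W+", phrase)
    return "#" + "".join(w.capitalize() for w in words if w)
```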

How I handle alt-text and image descriptions


I'm very serious about image descriptions, so I've been describing all my images for over a year.
However, the topic I post pictures about, virtual worlds based on OpenSim, is very, very niche and very, very obscure. Probably only one in over 200,000 Fediverse users has even heard of the general topic. This is not even close to common knowledge. So I have to assume that the vast majority of my audience needs it explained, needs everything in my images explained extensively to understand it.
The topic of 3-D virtual worlds is a very visual topic, so it might not be interesting for blind or visually-impaired people. On the other hand, however, they may be excited about the discovery that the Metaverse is not dead, and that free, open-source, decentralised 3-D virtual worlds exist right now and have been since as long ago as 2007. Of course, they'll be curious, and they'll want to explore my images like someone would who can see them. To make that possible and satisfy their curiosity, I have to describe my images at extreme detail.
In fact, I often have to do so anyway when a picture doesn't focus on anything specific.

There is also the rule that any text within the borders of an image must be transcribed 100% verbatim. My images may contain a whole lot of bits of text. And this rule does not explicitly include or exclude text that is not fully readable for whatever reason. So I also transcribe text that can't be read in the image to be on the safe side. This means that my image descriptions may contain lots and lots of text transcripts.
My full, detailed, explanatory image descriptions always go into the post text body, right below the images themselves, and not into the alt-text. They do so for two reasons.
One reason is because they contain explanations and other things that may be useful for anyone, not only for blind or visually-impaired people. But not everyone can access alt-text. Some people can't handle a computer mouse steadily enough to make a mouse cursor hover above an image so that the alt-text shows up. Other people can't use pointing devices such as mice, trackballs, touchpads, trackpoints or the like at all. For example, they may use a pen in their mouth or a headpointer strapped to their forehead with which they can press the keys on the keyboard. These people can't access alt-text either.
For those who can't access alt-text, any information exclusively available in alt-text and nowhere else is completely inaccessible and lost. If it's in the post itself, however, they can access it.
The other reason is that my image descriptions are extremely long. If you as a Mastodon user think 800 characters are tremendously long, think again: my record for the description of one single image is over 60,000 characters. In words: over sixty thousand. This is not a typo.
But Mastodon, Misskey and their respective forks such as Glitch, Hometown, Ecko, Firefish, Iceshrimp, Sharkey or Catodon have a hard limit of no more than 1,500 characters for alt-text. Unlike the character limit for posts, they enforce this limit on external content by truncating it and removing all characters beyond the 1,500-character mark. I can post alt-text with 60,000 characters, but Mastodon will chop 58,500 of them off and throw them away. And even Hubzilla's Web interface is limited in how much alt-text it can show at once because it can't scroll through alt-text.
Thus, my long image descriptions always go into the post itself.
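The truncation behaviour described above can be sketched in a couple of lines. The function name is mine; the 1,500-character cap is the limit stated for Mastodon, Misskey and their forks:

```python
ALT_TEXT_LIMIT = 1500   # hard cap enforced on incoming alt-text

def receive_alt_text(alt, limit=ALT_TEXT_LIMIT):
    """Everything beyond the limit is simply chopped off and discarded."""
    return alt[:limit]

kept = receive_alt_text("x" * 60000)   # a 60,000-character description
# Only 1,500 characters survive; the other 58,500 are thrown away.
```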
Nonetheless, I always write another image description for the alt-text. I have to satisfy those on Mastodon who absolutely demand a useful image description in the alt-text, no matter what. They may not care for there already being an image description in the post. In fact, I always hide these posts behind content warnings (see below), so they don't even see at first glance that there's an image description in the post unless I mention that in the content warning. To keep them from lecturing or sanctioning me for not adding alt-text to an image, I describe all my images twice.
However, due to alt-text being limited in length, I can't make the description in the alt-text as informative as the one in the post. I never explain anything, and I often don't transcribe any text either if it's too much. But the alt-text always mentions the long description in the post, what it contains (explanations, transcripts) and where exactly to find it.

How I handle sensitive content and content warnings


First of all, Hubzilla is vastly different from Mastodon in this regard. Mastodon is a Twitter clone from 2016 that has introduced the use of the StatusNet summary field for content warnings in 2017. Hubzilla is from 2015, and it was created by renaming something from 2012 which, in turn, was a fork of a Facebook alternative from 2010. Hubzilla has never been developed against Mastodon, and it has never tried to mimic Mastodon. It was there long before there was Mastodon. And both its creator and its current maintainer don't want it to ape Mastodon.
This means two things.
One, the summary field which Mastodon has repurposed as a content warning field in 2017 is still a summary field on Hubzilla. It doesn't make sense to give a summary for 500 characters or fewer. But it does make sense to be able to give a summary if you're theoretically able to post millions of characters at once.
So Hubzilla doesn't have Mastodon's CW field, at least not labelled "CW". And Hubzilla's culture was fully fledged and developed when Mastodon was launched in 2016, more so when Mastodon introduced the CW field in 2017, and even much more so when Mastodon exploded with Twitter refugees in 2022.
Putting writer-side content warnings into the summary field (which, again, is labelled "CW" on Mastodon, but not on Hubzilla) is not part of Hubzilla's culture.
Still, I do add Mastodon-style content warnings where I deem them appropriate. Apart from the usual suspects, of which I know hundreds, I add them for:

Two, Hubzilla has its own way of handling content warnings. It is called "NSFW". That's basically a simple word filter, which can optionally be activated, that automatically hides posts behind content-warning buttons, depending on the keywords in its word list. The word list is customisable, so everyone can get all the content warnings they desire, given that sensitive posts carry the necessary keywords or hashtags, and nobody has content warnings forced upon them that they don't need.
Hubzilla had this feature years before Mastodon introduced its CWs, and Hubzilla inherited it from Friendica, which has had it for even longer.
But in order for these filters to be triggered successfully, a post needs to have the appropriate keyword or keywords in it. This works best with hashtags. This means that I have to double all my Mastodon-style content warnings with matching hashtags. However, in many cases, there is not only exactly one hashtag for the same kind of sensitive content that is universally used by everyone, not even in filters. Thus, there are often multiple hashtags going with the same content warning.
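A reader-side word filter of this kind boils down to a keyword match against the post text and its hashtags. Here is a minimal sketch, with made-up names, of how such a filter might decide to hide a post:

```python
def nsfw_match(post_text, hashtags, word_list):
    """Return the keywords from the reader's word list that the post
    triggers; a non-empty result means 'hide behind a CW button'."""
    haystack = (post_text + " " + " ".join(hashtags)).lower()
    return [w for w in word_list if w.lower() in haystack]
```

This is also why the hashtags matter: a post that carries #EyeContact trips a reader's "eyecontact" keyword even if the post text never spells the phrase out.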
In combination, this leads to masses of hashtags at the bottom of most of my posts as I add hashtags for almost all my content warnings. I know that some Mastodon users have a problem with more than four hashtags in one post, but warning people about sensitive content and triggering their filters to remove or hide said sensitive content is more important than coddling Mastodon users who still have Twitter on their brains.
As for sensitive images, I have recently stopped posting on this channel any kind of image of which I'm certain it's sensitive or triggering to someone. It is for these images that I've created my (streams) channels (see way above). (streams) can make Mastodon blank sensitive images out; Hubzilla can't.

What it means when I follow you back


Most of the time, it means nothing. It means that I let you follow me. It does not necessarily mean that I actually follow you back.
This is due to a technical limitation on Hubzilla. I've set my channel up in such a way that I have to confirm all new connections. However, being a fork of a Facebook alternative, Hubzilla does not treat followers and followed as two separate things. Just like on Facebook, a connection is usually mutual by default. In practice, this means that when I confirm a new follower connection, I automatically "follow them back", i.e. create a mutual connection. This is hard-coded. I can't change it, not unless I let everyone follow me automatically without my consent.
But this does not mean that all your posts actually appear on my stream. If you don't write anything that's within the scope of this channel, I won't allow you to deliver your posts to my stream. Hubzilla has an absolutely staggering set of fine-grained permission controls that makes it possible for me to deny other users the permission to send me their posts.
If you write about OpenSim, I will allow your posts.
If you write about Second Life, I might allow your posts.
If you write about another virtual world that might be interesting for me, I might allow your posts.
If you write about the Fediverse, and you don't reduce the Fediverse to only Mastodon, I might allow your posts.
If you're an in-world acquaintance of mine who doesn't post about OpenSim, I very likely will allow your posts.
If none of this applies, I won't allow your posts. I'll let your comments on other posts through, I'll let your direct messages through, but I won't let your posts clutter my stream.
If I let your posts through, this doesn't necessarily mean I'll also let your boosts through. I can block boosts individually per connection. So unless your boosts are mostly interesting to me, I will block your boosts.
If Hubzilla should ever improve their filters, and I let your posts through, I may still apply a filter that only lets through what I want to read if you post a lot of stuff that I don't find interesting within the scope of this channel.
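Put as pseudologic, the allow rules above look roughly like this; the topic labels are hypothetical stand-ins, not anything Hubzilla itself uses:

```python
MAYBE_TOPICS = {"secondlife", "other_virtual_world", "fediverse_beyond_mastodon"}

def posts_allowed(topics, in_world_acquaintance=False):
    """True = posts allowed, "maybe" = case by case, False = posts blocked
    (comments and DMs still come through either way)."""
    if "opensim" in topics:
        return True
    if in_world_acquaintance:
        return True
    if MAYBE_TOPICS & set(topics):
        return "maybe"
    return False
```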

Finally


If you aren't okay with any of this, feel free to block me now before it's too late. I don't care how many people follow me or can read my posts as long as the right ones can. But I will not change the way I post to make it more like Mastodon, especially not if I can't because something is hard-coded.
Thanks for your patience.
#Long #LongPost #CWLong #CWLongPost #FediMeta #FediverseMeta #CWFediMeta #CWFediverseMeta #NotOnlyMastodon #FediverseIsNotMastodon #MastodonIsNotTheFediverse #AltText #AltTextMeta #CWAltTextMeta #ImageDescription #ImageDescriptions #ImageDescriptionMeta #CWImageDescriptionMeta #CW #CWs #CWMeta #ContentWarning #ContentWarnings #ContentWarningMeta #Hashtag #Hashtags #HashtagMeta #CWHashtagMeta #Hubzilla #Streams #(streams) #OpenSim #OpenSimulator #Metaverse #VirtualWorlds #Introduction

When researchers gave healthy mice antibodies from patients with , some of the animals began showing Long COVID symptoms, specifically heightened pain sensitivity and dizziness.

It is among the first studies to offer enticing evidence for the .

The research was led by , PhD, Sterling Professor of Immunobiology at Yale School of Medicine (YSM).

"We believe this is a big step forward in trying to understand and provide treatment to patients with this subset of Long COVID," Iwasaki said.

Iwasaki zeroed in on autoimmunity in this study for several reasons.

First, Long COVID's persistent nature suggested that a chronic triggering of the immune system might be at play.

Second, women between ages 30 and 50, who are most susceptible to autoimmune diseases, are also at a heightened risk for Long COVID.

Finally, some of Iwasaki's previous research had detected heightened levels of antibodies in people infected with SARS-CoV-2.

From what I've read, colours are best described using basic colour names plus extra attributes for brightness and saturation. Even if a colour is explicitly named in the image or otherwise officially named, it's best to use the name and then describe the colour anyway.
Direct quote from my latest image description which I hope is acceptable: "The background behind him is a solid, slightly pale medium blue with a minimal hint of green."
#Long #LongPost #CWLong #CWLongPost #ImageDescription #ImageDescriptions #ImageDescriptionMeta #CWImageDescriptionMeta





