First of all, you've suggested a few things that the Open Social Web Fediverse will revolt against and has actually revolted against in the past.
One, making it depend on centralised services. The "tertiary services" you've mentioned would be just that.
Depending on centralised services is always bad. Whoever runs these centralised services can assume control over the whole Fediverse. It's bad enough to see how Mastodon uses its market share to force other non-corporate Fediverse projects to do things their way.
And if a centralised service that the Fediverse depends on shuts down, this could destroy the whole Fediverse.
Two, commercialising the Fediverse and selling it out to VC vultures and corporations.
Furthermore, your suggestion of, quote, "An API engine for the Fediverse", end quote, will very likely stifle technological development in the Fediverse.
Whichever paid professionals would build it wouldn't even know half of what the Fediverse can do, and what's planned for the Fediverse. It's very likely that they'll go and ignore all the FEPs because they aren't officially part of the ActivityPub standard.
It's also very likely that they'll build the API against only what they know. And that's semi-corporate Mastodon and the corporate players, namely Threads, Flipboard, Ghost and maybe Automattic/WordPress. Any feature offered by anything else in the Fediverse, but not by any of these, would be dead because it wouldn't be covered by the API.
Decentralised identity and, ultimately, nomadic identity via ActivityPub would be stillborn because the API wouldn't cover it. For one, the paid professional devs wouldn't even know it's being worked on. Besides, it's only defined in FEPs, and even these aren't necessarily finalised.
But even if features readily available on e.g. Akkoma or Iceshrimp or Friendica or Hubzilla are being taken into consideration, they may clash with what's needed for Mastodon and the corporate silos. In these cases, it will always be the latter that'll be favoured. Mastodon doesn't want to adapt, and the corporate silos have all the money.
This could kill off entire Fediverse projects because their key features would no longer work if this API were forced upon the whole Fediverse. Who needs (streams) or Forte anymore if the API doesn't support things like nomadic identity or a permissions system or even just unlimited character counts and full HTML formatting support in Note-type objects?
Now, you could think in "market" terms and speak of "consolidation" if all Fediverse projects that don't have an Inc. and a CEO have to shut down. But the Fediverse would lose lots of features this way.
In comparison with other free, open-source Fediverse server applications, Mastodon is actually lacklustre, under-equipped and hopelessly outdated by design. All the features which the non-Mastodon FLOSS parts of the Fediverse have and Mastodon doesn't would disappear along with the server applications that (used to) support these features.
Also, having the development of this API funded reeks strongly of venture capital, which means an expected ROI of 10:1 or higher. It has to pay off. This would force the whole Fediverse, the vast majority of which is non-profit and non-corporate right now, under the dictatorship of commercialisation and corporatism. And the whole API would be optimised for maximum revenue, not for the best functionality within the Fediverse as it exists right now and as non-corporate entities plan to develop it in the future.
#Long #LongPost #CWLong #CWLongPost #FediMeta #FediverseMeta #CWFediMeta #CWFediverseMeta #Fediverse #Corporatism #Commercialisation

I'm not interested in connecting to the Bridgy Fed Bluesky bridge. In fact, I'm not interested in Bluesky at all.
If a substantial number of users were on Bluesky and actually active there, that'd be another story. But such people are just about the only ones whom I seek out and connect to myself. After all, this is not a personal all-purpose channel. This is a channel that's limited in scope and topics.
I don't need followers from Bluesky either. I don't think I've got that much to tell them. Bluesky can't handle posts with over 300 characters, but most of my posts exceed the 1,000-character mark, and many go much, much further.
What would Bluesky users follow me for anyway? I only write about three topics. One, OpenSim. Maybe jabs at other virtual worlds. Next to nobody in the Fediverse has followed me for this, so why should someone on Bluesky, especially seeing as all my posts will be chopped to a tiny fraction of their original length? Two, the Fediverse beyond Mastodon. Not interesting for Bluesky users. Three, accessibility in the Fediverse. Even less interesting for Bluesky users.
My over 500 connections are a lot already. They're getting hard for Hubzilla to handle, I guess. I don't need that many more. I mean, the vast majority of connections that were established by the other side first don't have permission to send me their posts anyway because I want to limit the uninteresting off-topic clutter on my stream. For unlike Mastodon, where you can simply scroll down one hour's worth of posts and be done with it, Hubzilla has an unread activities counter from which you can access them all.
Chances are slim that someone who starts following me from Bluesky posts about OpenSim. Chances are nil that someone who starts following me from Bluesky posts about the Fediverse beyond Mastodon or accessibility in the Fediverse. And people who post about neither are very likely to have no permission to send me their posts.
As for Threads, I won't connect with anyone there. To the best of my knowledge, the Hubzilla hubs with both my main channel instance and my clone are blocked by Threads. That's because public Hubzilla hubs cannot comply with Threads' rules, and at least these two particular hubs are big enough for Threads to care and take countermeasures.
#Long #LongPost #CWLong #CWLongPost #FediMeta #FediverseMeta #CWFediMeta #CWFediverseMeta #Fediverse #Bluesky #BridgyFed #BlueskyBridge #Meta #MetaPlatforms #Threads

First of all, if I follow someone, I usually connect to them first.
If someone connects to me first, I have to "follow" them "back" in order to confirm their connection request. Hubzilla doesn't distinguish between followers and followed, Twitter-style. It only has bidirectional connections, Facebook "friends"-style. So "following" someone "back" is a technical necessity to give them permission to receive my posts or, in Mastodon lingo, follow me.
But even then, I don't necessarily really follow them back. Unless they really post things that are interesting within the two topics of my channel, I don't give them permission to send me their posts. Comments, yes. DMs, yes. But posts, no. Another Hubzilla thing. So when I "follow" most people "back" who follow me, I don't actually follow them.
With that out of the way: I only follow those first who either post about 3-D virtual worlds, preferably OpenSimulator-based, maybe also Second Life, or whom I know from OpenSim. And I only follow those back to whom either any of this applies, or who largely post interesting things about the Fediverse outside of Mastodon, or maybe also those who post about other 3-D virtual worlds that aren't just crypto scams.
Of all these people, hardly anyone ever describes their images. I'm one out of probably only two people from the Second Life and OpenSim bubbles in the Fediverse combined who add useful image descriptions to the alt-texts of their images. If I refused to follow anyone who doesn't add alt-text to their images, I'd have to stay out of these bubbles almost entirely.
As for the "Fediverse beyond Mastodon" crowd: Let's face it, alt-text is only a thing on Mastodon and nowhere else in the Fediverse, maxxing out quality, accuracy and detail level of alt-text even more so. If I refused to follow anyone who doesn't add alt-text to their images, I pretty much couldn't follow anyone who talks about the Fediverse outside Mastodon.
Basically, if I refused to follow anyone who doesn't add alt-text to their images, the only Fediverse actors whom I'd give permission to send me posts would be a few who post about image descriptions in the Fediverse plus three more channels of my own. I'd even have to exclude the inner circles of Hubzilla and (streams). I'd be down to under 500 connections again, only five of which would be permitted to send me posts.
And no, I won't police my own connections and pressure them into describing their images. If I did that, it'd look like I was trying to pressure them into writing descriptions like this (link: one image, 1,400+ characters of short description in alt-text, 60,000+ characters of long description in the post).
#Long #LongPost #CWLong #CWLongPost #FediMeta #FediverseMeta #CWFediMeta #CWFediverseMeta #Fediverse #Hubzilla #AltText #AltTextMeta #CWAltTextMeta

I myself always describe my original images twice: once in the alt-text, once in the post where I don't have any character limit.
But: These are two different descriptions. The one in the alt-text is the short image description. It never grows significantly longer than 1,400 characters, usually not longer than 1,200 characters or so because I need the rest of the character limit imposed by other projects for something else.
This one never contains any explanations because explanations must never go into alt-text.
The one in the post is the long description. It always contains all text transcripts, and it also contains all necessary explanations. The explanations alone can take up several thousand characters combined, and the whole long description is often tens of thousands of characters long. This means it is not identical with the one in the alt-text.
Do you let this count? Or should I put
- the short alt-text description into the alt-text
- the short alt-text description into the post as well
- additionally, the long description into the post, right below the short alt-text description
As for my meme posts, which are almost always based on established meme templates, I currently describe their visuals only once, in the alt-text, and in far fewer characters than my original images, so far always under 800, usually under 700. I hope that doesn't fall under "really long" yet. In addition, I put necessary links and/or explanations into the post for people to understand the image.
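If it helps to see the split as code, here is a purely illustrative Python sketch of my own; the function and field names are invented for this example, not taken from any real Fediverse API:

```python
# Purely illustrative: the two-description workflow as a tiny helper.
# The short description goes into the alt-text, the long one into the
# post body, which has no character limit on Hubzilla.

MASTODON_ALT_CAP = 1500  # characters Mastodon/Misskey and their forks keep

def attach_descriptions(image_url: str, short_desc: str, long_desc: str) -> dict:
    """Pair one image with its short (alt-text) and long (post) description."""
    if len(short_desc) > MASTODON_ALT_CAP:
        raise ValueError(
            f"short description is {len(short_desc)} characters; "
            f"everything past {MASTODON_ALT_CAP} would be cut off elsewhere"
        )
    return {
        "attachment": {"url": image_url, "name": short_desc},  # alt-text
        "content": long_desc,  # explanations and transcripts go here
    }
```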
#Long #LongPost #CWLong #CWLongPost #AltText #AltTextMeta #CWAltTextMeta #ImageDescription #ImageDescriptions #ImageDescriptionMeta #CWImageDescriptionMeta

That's what alt-text guides for static websites, blogs and commercial social media silos tell you. The reason why I was asking in the first place was that Mastodon tends to think differently.
See, these alt-text guides keep preaching to keep alt-texts short. Mastodon, on the other hand, loves long and detailed image descriptions, even in alt-text.
#Long #LongPost #CWLong #CWLongPost #AltText #AltTextMeta #CWAltTextMeta #FediMeta #FediverseMeta #CWFediMeta #CWFediverseMeta #ImageDescription #ImageDescriptions #ImageDescriptionMeta #CWImageDescriptionMeta

Not everything in the Fediverse supports following hashtags. For example, Mastodon does, Hubzilla (from where I'm posting right now) does not, and (streams) (where I have two more channels) does again.
Still, searching literally the whole Fediverse is technologically impossible, and watching the whole Fediverse for things to happen is even more so. Your instance would have to know absolutely every Fediverse instance out there. All of them. No exceptions.
This means it would have to know in real-time when someone spins up a new instance of whatever for the first time, be it Mastodon, be it CherryPick, be it (streams) or Forte, be it the first private pre-alpha of something they've just developed from scratch. It would have to know because it'd have to immediately start indexing everything that happens on that new instance.
And there is absolutely no way of being notified about that. Even if every Fediverse server application implemented such a notification feature, where would it send that notification? To all running Fediverse instances out there? And how is a brand-new instance supposed to know all Fediverse instances out there before it spins up for the first time so that it can notify all of them when it spins up?
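To make that concrete, here is a toy sketch of what instance discovery amounts to in practice. It's my own illustration and assumes the crawled servers are Mastodon-compatible and expose the /api/v1/instance/peers endpoint; many Fediverse projects offer no equivalent at all:

```python
# Toy crawler: discovery can only ever reach instances that some
# already-known instance federates with.

import requests

def crawl_fediverse(seeds: set[str], rounds: int = 3) -> set[str]:
    known = set(seeds)
    frontier = set(seeds)
    for _ in range(rounds):
        next_frontier: set[str] = set()
        for host in frontier:
            try:
                peers = requests.get(
                    f"https://{host}/api/v1/instance/peers", timeout=10
                ).json()
            except Exception:
                continue  # down, non-Mastodon, or peers list disabled
            next_frontier |= set(peers) - known
        known |= next_frontier
        frontier = next_frontier
    # A brand-new instance that nobody federates with yet appears in
    # no peer list, so no number of crawl rounds will ever find it.
    return known
```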
I think I'll have to write an article on this, as often as the question pops up why there is no search that covers the absolute entirety of the Fediverse.
#Long #LongPost #CWLong #CWLongPost #FediMeta #FediverseMeta #CWFediMeta #CWFediverseMeta #Fediverse #Search

Truth be told, you'd have to enlarge the images in question by a factor of two to five relative to their original resolution, depending on the colour, to actually recognise the herringbone pattern on the jacket.
I just don't want to omit details in my descriptions that someone may need to know. And I don't want to be called lazy, neglectful or even ableist because people have to look up information that I could have included in at least one of my two image descriptions myself.
I mean, the image description standards are way higher on Mastodon than anywhere else. People get used to them, and I have to keep up with them, if not stay ahead of them, in order not to fall behind.
CC:
#Long #LongPost #CWLong #CWLongPost #FediMeta #FediverseMeta #CWFediMeta #CWFediverseMeta #ImageDescription #ImageDescriptions #ImageDescriptionMeta #CWImageDescriptionMeta #Ableism #A11y #Accessibility

Maybe someone can answer me this accessibility question, especially blind or visually-impaired users as well as Mastodon's alt-text police, just to be safe:
When I mention in an image description that a 3-D avatar is wearing a herringbone-patterned tweed jacket and full brogue shoes, do I have to describe in detail what the herringbone fabric pattern and full brogue shoes look like, in general and/or specifically in this case?
I mean, I will mention the colours of these jackets and shoes, I will mention the size of the herringbone pattern, and I will mention that at least the herringbone pattern does not come with any actual surface structure.
But do I have to explain and describe what a herringbone pattern is? And do I have to give a detailed description of the shape and the structure of the toe cap that defines a full brogue shoe in general and these particular full brogue shoes in particular? And if I do so, do I have to also explain the relevant parts of a shoe (main body, toe cap, lace panels, heel cap)?
Or don't I have to describe all this because nobody will need to know it anyway? Because it's commonly known? Because it doesn't matter? Or because absolutely nobody actually cares? Can I actually get away with name-dropping "herringbone pattern" and "full brogue shoes"?
(For the record: Such detail descriptions will probably not go into the alt-text. They will rather go into a long image description in the post itself, where I don't have any character limits to worry about.)

#Long #LongPost #CWLong #CWLongPost #ImageDescription #ImageDescriptions #ImageDescriptionMeta #CWImageDescriptionMeta #Blind #VisuallyImpaired #Inclusion #A11y #Accessibility

A few more details:
* FEP-ef61: Portable Objects
Invented in, I think, 2023 by for (based on ActivityPub). Currently implemented there and in Mike Macgirvin's and . Part of the plan to introduce almost Nomad-level, but cross-project, nomadic identity to ActivityPub.
* FEP-61cf: The OpenWebAuth Protocol
Invented in 2018 by Mike Macgirvin for Zap (Zot6 development platform, discontinued 2022). Backported to Hubzilla in 2020. Full server-side and client-side implementations exist only in Hubzilla (based on Zot6, also supports ActivityPub etc.), (streams) (based on Nomad, also supports Zot6 and ActivityPub) and Forte (based on ActivityPub). Friendica has a client-side implementation. Mastodon has a client-side implementation pull request that has yet to be merged.
CC:
#Long #LongPost #CWLong #CWLongPost #FediMeta #FediverseMeta #CWFediMeta #CWFediverseMeta #Fediverse #Friendica #Hubzilla #Zap #Streams #(streams) #Forte #Zot #Zot6 #Nomad #ActivityPub #FEP #FEPef61 #FEP61cf #DecentralizedIdentity #NomadicIdentity #OpenWebAuth #SingleSignOn

AFAIK, there are only three implementations of FEP-ef61. I'd call two of them barely not experimental anymore at best, and the third one, the only one that actually seems to rely on it because it places all bets on nomadic identity via ActivityPub, is not really open to the public yet.
FEP-ef61 itself seems to be finalised (don't take my word for it), but (intentionally left the DID in the link). So it may be better to watch the other implementations for a while.
#Long #LongPost #CWLong #CWLongPost #FediMeta #FediverseMeta #CWFediMeta #CWFediverseMeta #DecentralizedIdentity #FEPef61

I've only imposed a few limitations upon my own descriptions of my original images.
One, I don't describe anything that's entirely outside the borders of the image. The only exception is the position of the Sun if the (virtual) sunlight creates visible shadows.
Two, I don't describe anything that's entirely concealed behind something else. And I limit my descriptions of anything that's partially concealed. Exception: I still transcribe any text that's partially concealed if I can source it.
Three, I don't describe any surfaces that fully face away from the point of view.
Four, I no longer give detailed descriptions of images within my image that go beyond what these images themselves show in-world. If details are revealed by looking at the scenery in-world as opposed to the image with its limited resolution, I describe them. If they are revealed by walking or moving the camera within the scope of the image itself, I describe them. But if details are only revealed by walking or moving the camera farther away or even by teleporting, then I won't describe them anymore.
The latter limitation comes from . I had a preview image on a teleporter, an aerial view that didn't even show any actual details of the destination. If it had shown more details, it would have revealed hundreds upon hundreds of single-destination teleporters, dozens of them with their own detailed preview images and even with images within these images. In this preview image, however, each one of these single-destination teleporters was not even the size of one pixel. They fully blended in with the ground.
Before I imposed that limitation, I had described two such images within an image in over 4,500 characters each. After both, I actually had to tell my readers which description level I was returning to, that I was getting back from describing an image within the image to describing the image itself. The longest of these sub-sub-descriptions barely exceed 1,000 characters now, though.
#Long #LongPost #CWLong #CWLongPost #ImageDescription #ImageDescriptions #ImageDescriptionMeta #CWImageDescriptionMeta #Inclusion #A11y #Accessibility

"Just do it" is easily said. I tend towards an extreme level of detail. And that's where "just" goes out the window.
"Geruschbeschreibung" klingt fr mich so, als mte ich jedes Gerusch in einem Video oder einer Audioaufzeichnung so detailliert beschreiben und erklren, da Gehrlose es mit demselben Detailgrad wahrnehmen knnen, wie studierte und jahrzehntelang erfahrene professionelle Toningenieure mit ihrem trainierten, hochanalytischen Gehr es ber High-End-Studiomonitore hren.
Da gbe es dann z. B. ein Klickgerusch, das einfach eine Art Systemklang einer Anwendung ist. Das wre nur eine Zehntelsekunde lang, aber ich wrde mehrere Stunden brauchen, um es zu beschreiben, und du wrdest mehrere Minuten brauchen, um die Beschreibung zu lesen.
Ich wei berhaupt nicht, wie ich jemandem, der komplett ohne jegliches Gehr geboren wurde, eine realistische, detaillierte Vorstellung davon geben soll, wie ein synthetisch erzeugtes Gerusch klingt.
Bei visuellen Beschreibungen ist es ganz hnlich. Wenn ich meine eigenen Bilder poste, . Ich bin der Ansicht, .
Wenn ich jetzt aber ein Video machen wrde und das beschreiben mte, wrden die Beschreibungen noch extremer ausfallen.
Ich mte ja jedes Video dreimal beschreiben.
Zunchst einmal htte ich natrlich den Untertitel, der neben meinen gesprochenen Kommentaren auch smtliche Gerusche erwhnt und detailliert beschreibt. Eine Stelle mit vielen Geruschen knnte gehrlose Zuschauer berfordern, weil ich soviel auf einmal zu erwhnen und beschreiben habe. Die Alternative wre, das Video an der Stelle automatisch zu pausieren, damit das Publikum gengend Zeit hat, die Beschreibungen zu lesen, und dann auch zu erwhnen, da whrend der Pause alle Klnge abrupt aufhren und nichts zu hren ist.
Dann htte ich die Audiodeskription fr Sehbehinderte. Die mte grundstzlich so detailliert ausfallen wie meine Bildbeschreibungen. Nur htte ich es mit sehr viel mehr Material zu tun, nicht nur Bildmaterial, das sich ja stndig verndert, sondern auch alle Kamerabewegungen mssen im Detail beschrieben werden.
Das wrde bedeuten, da ein Video dutzende Male abrupt pausiert und dann minutenlang im Standbild verharrt, weil wieder z. B. ein neues Gebude im Bild aufgetaucht ist, das ich noch nicht beschrieben habe, und ich jetzt dieses Gebude beschreiben mu.
Wenn ich ein zehnmintiges Video htte, wrde ich mehrere Wochen brauchen, um die Audiodeskription zu schreiben. Und die Pausen in der Audiodeskription wrden das Video in seiner Version fr Hrgeschdigte auf sechs Stunden aufblasen.
Eigentlich bruchte ich dann auch noch ein komplettes Transkript fr Taubblinde, also im Prinzip die Audiodeskription plus die Geruschbeschreibungen aus den Untertiteln plus Timecode. An dieser Stelle wren wir schon locker bei acht bis zehn Stunden Lesezeit fr zehn Minuten Video.
Da htte ich aber noch ein anderes Problem: Taubblinde interessieren sich nicht unbedingt so sehr dafr, wie etwas aussieht, sondern mehr dafr, wie etwas riecht und wie sich etwas anfhlt, wenn man es berhrt. Nur wie soll ich beschreiben, wie sich etwas anfhlt, wenn man es berhrt, das man gar nicht berhren kann, weil es nur im Computer existiert, und was daher auch keinen Geruch hat
Ich knnte natrlich jedes Mal schreiben, wie etwas riechen knnte und wie es sich anfhlen knnte, wenn man es berhren knnte, da es aber tatschlich geruchslos ist und auch nicht berhrt werden kann, weil es ja keine physikalische Existenz hat und auch nur eine bildliche Darstellung eines realen Materials ist. Aber zum einen wrde es irgendwann 15 bis 20 Stunden dauern, die Beschreibung eines zehnmintigen Videos zu lesen. Und zum anderen, wenn taubblinde Leser so eine Formulierung zum 50. Mal zu lesen bekommen, dann wird es irgendwann nicht nur langweilig, sondern nervig.
Schlufolgerung: Gewisse Inhalte lassen sich nicht adquat barrierefrei darstellen.
#Long #LongPost #CWLong #CWLongPost #LangerPost #CWLangerPost #Bildbeschreibung #Bildbeschreibungen #BiBesch #BildbeschreibungenMeta #CWBildbeschreibungenMeta #Untertitel #Audiodeskription #Audiodeskriptionen #Transkript #Transkription #Inklusion #Barrierefreiheit

"a virtual environment that resembles a large, open, circular structure with a glass dome ceiling. Inside, there is a mix of natural and industrial elements."
Yeah, but what do they look like?
"A person stands in the foreground, waving with one hand. They are dressed in a dark suit."
Gender known, and if so, which gender? Body size and shape? Rough age range? Skin tone or race? Hair length and style? Hair colour? Facial hair? Colour of the suit? Material of the suit? Footwear? Any accessories? Facial expression?
"Behind them, there is a tree with purple leaves, and several tables and chairs are arranged in a casual seating area."
Again, what do they look like?
"On the walls, there are two large screens displaying the letter "M" in a circle, along with other text and graphics."
Text? What does the text say? Why isn't it transcribed? And the graphics, what do they look like?
As for the extra details of the avatar description and the text transcripts, the questions simply follow several alt-text guidelines out there and what they demand.
There are certain things which, to my understanding, must be mentioned about a person in an image, especially if there's only this one person. And again, there is also the rule that if there is text in an image, it must be transcribed verbatim. Simply mentioning that there's text in an image and then not transcribing it comes across as not only lazy, but outright ableist, probably not only here in the Fediverse.
The "what does it look like" questions, in the meantime, are likely to come from blind or visually-impaired users who genuinely don't know what these things look like, but who'd like to know. After all, sighted users can see what these elements look like, and blind or visually-impaired users must have a chance to experience such an image in all the same ways as sighted users.
And I'm still not sold on the idea that all non-sighted Fediverse users know what image resolution is, and that they accept something being too small in an image to be recognisable as a valid reason not to describe it rather than an excuse to weasel out of describing it.
#Long #LongPost #CWLong #CWLongPost #FediMeta #FediverseMeta #CWFediMeta #CWFediverseMeta #AltText #AltTextMeta #CWAltTextMeta #ImageDescription #ImageDescriptions #ImageDescriptionMeta #CWImageDescriptionMeta #A11y #Accessibility

I know that decision tree, and it doesn't really work for my original images because it's limited to what you'd usually find on professional/commercial static websites or blogs with a very small choice of kinds of images.
Let's take as an example here. It was the first image I ever described in detail. I'm not going to link to that description because it's hopelessly outdated and probably terribly lacking at only a bit over 13,000 characters, and the image does not have a descriptive alt-text yet. It's an old shame, so to speak.
"Does the image contain text?" Let's say what the image shows within its borders has 22 bits of text on it, for any definition of text. This could count as yes.
However, of these 22 bits of text, only three are legible in the image as it is, at the resolution at which I've uploaded it: three times a capital M. As I've said, for any definition of text. The other 19 are so small that they're illegible, or so small that they can't be identified as text, or so tiny that they're invisible at this resolution. The big black sign in the middle with the yellow writing on it has a tree in front of it.
Is it still a yes because, while it isn't readable, there still is text?
Is it a yes for the three capital Ms?
Is it a no because the text does not show itself as text in this image at this resolution? So, technically speaking, with the exception of the three capital Ms, there is no text in this image because where there's text in the original, there's just some blurry mush that does not qualify as text in the image as shown?
Or is it a no because I can't transcribe it anyway if I can't read it? Fun fact: I have transcribed all this text. 100% verbatim. And provided translations for everything that isn't English. So "you can't read it anyway" doesn't count because I can read it.
Now comes the kicker: If we pick yes because there is text in the image, the decision tree implies that it is an image of text and nothing else, and that the image-describing process is over after transcribing the text. This wouldn't even work with an image macro.
"Does the image contribute meaning to the current page or context?" Does this question even work in this case?
Let's say the post in question is about the world where the image was taken shutting down soon and my avatar in this world disappearing. Because that's what it was about when I first posted this image. It's just meant to be a last farewell.
It doesn't add any extra information. This is not a post in a professional commercial or scientific or technological blog. So, does the image contribute meaning, yes or no?
If so:
- It's not "a simple graphic or a photograph". It's a digital 3-D rendering, and it's anything but simple.
- It's not "a graph or complex piece of information". Complex, yes. But it isn't a graph, and it isn't a piece of information of the kind you'd have on a scientific website.
- I don't think it "shows content that is redundant to real text nearby".
Other options aren't available.
"Is the image purely decorative or not intended for users?" That's debatable. On professional websites and blogs, this question may make sense. On social media, where nobody adds decorative images to posts, it doesn't. In the Fediverse, which has way higher accessibility standards than Facebook or Tumblr or Threads or Bluesky or LinkedIn, it makes even less sense.
"Is the image's use not listed above, or is it unclear what alt text to provide?" Probably.
The top of the article is already a dead giveaway: This guide is not meant for social media. Not for the big commercial silos, and even less for the Fediverse, where Mastodon re-defines what makes an image description good. Not even two dozen people in the world use social media that support HTML <img> tags in posts.
While professional Web accessibility experts will throw their hands up into the air in utter outrage over 250 characters of alt-text because it's too long, Mastodon users celebrate alt-text that's four times as long.
So this is a case where a whole bunch of edge-cases unhandled by the WAI meet in one place:
- A situation in which the audience doesn't come to the content, but the content comes to the audience, without the audience necessarily explicitly seeking out this kind of content.
- Social media with possibilities that vastly exceed those of the big commercial silos, especially regarding character count, while at the same time mostly not supporting full HTML.
- An audience which has defined its own accessibility "rules". Including blind or visually-impaired people who do want to know what something in an image looks like, even if the WAI alt-text guidelines forbid describing it.
- Content that has never been taken into consideration by any alt-text/image description guide out there. Ever.
Trying to force this into the WAI or WCAG guidelines is akin to trying to push a square peg into a round hole that was drilled by people who think all pegs are round.
I mean, as you've just seen, I can't even clearly answer any of the questions in the decision tree.
#Long #LongPost #CWLong #CWLongPost #FediMeta #FediverseMeta #CWFediMeta #CWFediverseMeta #AltText #AltTextMeta #CWAltTextMeta #ImageDescription #ImageDescriptions #ImageDescriptionMeta #CWImageDescriptionMeta #A11y #Accessibility

I don't know for certain what the majority want in general. And I don't know what they'd want in my very specific case.
I can only try and extrapolate what they might want from other Fediverse users' image descriptions and feedback on other Fediverse users' image descriptions, as little as there is.
The problem with this is that I don't post what everyone else posts. Not real-life photographs, not screenshots from social media etc., but renderings from very obscure 3-D virtual worlds. This means that there is next to nothing in my images that anyone is familiar with, nothing that any blind or visually-impaired user has a rough idea of what it looks like.
I've seen real-life photographs, sometimes literally focusing on one specific element in them with the whole background blurred out, that were described in over 800 characters. I've seen them be praised for their alt-texts. On the other hand, I've never seen a real-life photograph in the Fediverse be criticised for too much alt-text.
This, however, doesn't easily translate to virtual world renderings. Real-life photographs are much more familiar and much more likely to mostly contain things that people are more or less familiar with. And yet, people love it when they're described in 800 characters and more, all the way to replying with hashtags such as #AltTextAward or #AltTextAwards or #AltTextHallOfFame.
Logical conclusion: If there's more in the images that people aren't familiar with, I'll have to describe more than in these real-life photographs. And there is more in the images that people aren't familiar with.
Virtual world renderings are a largely unexplored edge-case. Only very few people in the Fediverse post these. I think only two describe them. And I'm the only one who really puts some thought into describing them instead of trying to get away with the bare minimum. This means that what I'm trying to do is a first. Nobody has done it before me. There's no prior experience with it.
Thus, I have to go with my own assumptions and conclusions based on a) observations on Mastodon and b) the differences in familiarity between real life and what I post about.
Three things are clear about my images.
First, if sighted people see it, they don't really know what it is, where it is etc.
Second, if non-sighted people come across the image, there is nothing in it whose looks they know from having been told often enough, simply because they've never been told what anything in the image looks like. But they may want to know what it looks like. And it's their right to know what it looks like.
Third, this topic is such a small niche and so extremely obscure that if you don't know something, you can't just look it up on Wikipedia. You can't even Google it. Generally, the only source of information that could really help you with my pictures, that's me. I'm definitely the only way "to get the larger details and nuances".
And so there's much more in my images that needs to be described. And there's much more that needs to be explained, one of the reasons why I always describe my virtual world renderings twice.
This starts with the location, the place where an image was taken. There are cases in which it does matter where an image was taken. My virtual world renderings are such cases.
If a real-life location is shown in a photo, sighted people may recognise it because it's so famous. Otherwise and for non-sighted people, simple name-dropping is usually sufficient. There's hardly any place in real life that can't be sufficiently mentioned in under 50 characters.
I can't name-drop. It won't tell anyone anything because nobody would know the name I've dropped. If I want to tell people where an image is from, I'll first have to break it down and then explain it and explain the explanation and so forth. I can't tell anyone, sighted or not, where my images are from in under 1,000 characters. Not if I want them to understand it.
As for visual descriptions, the usual advice is to limit yourself to what's important within the context, describe only the one important element in detail and hint at everything else at most. But I don't always have that one important element. I may have about two dozen important elements. Or, more often, the post is about the whole image, the whole scenery, and everything in it is important just the same.
But even if something in the image is more important than something else, I still have to describe everything. I mean, we're talking about what amounts to glances into a whole new universe for 99.999% of all people out there. Sure, many will shrug it off.
Others, however, may be intrigued, curious even. After all, this is evidence that "the Metaverse" is, surprisingly, alive. It is not merely suggested in AI-generated pictures. It really exists. And it looks better than all of Zuckerberg's propaganda. These people don't care what matters in the image and what doesn't. They go on an exploration trip all across the whole image and take in all the details.
Blind or visually-impaired people can't do this. But they may want to do it. And they've got the right to do it, just like sighted people. So they should be given just the same opportunity to do it. Remember that I can't assume that they know what anything in the image looks like unless there's a real-life counterpart that looks very much the same.
Whenever there's something in one of my images that doesn't exist in real life, I have to assume that nobody knows anything about it. So not only do I need an extensive visual description, but I often also need an extensive explanation of this one item.
Finally, there's one more thing that takes up a lot of room: text transcripts. The rule is that if there is text within the borders of an image, it must be transcribed. I rarely even see the exception "unless it doesn't matter within the context". And, again, it tends to happen that everything in one of my images matters within the context because the context is the very image itself.
What this rule doesn't cover at all is text that is unreadable in the image as it is shown. There is no exception for this kind of text, nor is it explicitly included in this rule. It isn't handled at all. It has never even been thought of. Hence, I must assume that the rule applies to this kind of text just as well.
Before you say that I can't transcribe text that I can't read: I actually can. I don't transcribe text by looking at it in the image. I transcribe text by looking at it in-world. And all of a sudden, those six pixels in a row that are ever so slightly more greenish than the surrounding white are two lines of text. That blot, four pixels wide, three pixels high, is actually a sign with a 1024x768-pixel texture and text that's perfectly legible. That tree trunk in front of that sign? In-world, I can look behind it.
If I can transcribe all this text, and nothing says I must not do so, I assume I must do so. And so I may end up with several dozen transcripts of more or less text which, including their respective contexts in the image description, take up more characters than fit into a Mastodon alt-text. If this is the case, then the text transcripts must go into the long description in the post rather than the short description in the alt-text.
This is not by user request. This is an accessibility rule that I follow.
Now you may say that I don't have to deliver such an enormous infodump at once on a silver platter, whether people want it or not. You may say that they could always ask if they want something more.
But seriously, this is about accessibility. And if people have to ask and then wait for assistance, it isn't accessible. They could just as well ask for the whole image description, and if they don't, I don't have to write it. It wouldn't make much of a difference.
#Long #LongPost #CWLong #CWLongPost #FediMeta #FediverseMeta #CWFediMeta #CWFediverseMeta #AltText #AltTextMeta #CWAltTextMeta #ImageDescription #ImageDescriptions #ImageDescriptionMeta #CWImageDescriptionMeta #A11y #Accessibility

That's part of what I mean.
I could sit down for two solid days, morning to evening. Describe an image twice over. 1,400 characters of short description in alt-text. Over 60,000 characters of long description in the post, including all necessary explanations and all transcripts of all text within the borders of the image because they wouldn't fit into the alt-text.
You may think this is completely outlandish, but I've actually done exactly this before. Just search for the hashtag #UniversalCampus and scroll all the way down; that's the post I'm talking about. The original is here: .
For the third group, I guess it's what they need. Granted, when I mention that there is a preview image of another place on a teleporter, they may still be disappointed that I didn't give a detailed description of this image or even of the place itself, regardless of image resolution.
The first group, those who want short descriptions, will complain that it's too much, and that not providing actually short descriptions of no more than 200 characters in the alt-text is ableist because longer alt-text is too inconvenient.
The second group, those who want whimsical descriptions, may start having the alt-text read to them, stop after a few seconds because it's so bland, fire up their AI which hasn't the foggiest idea what it's even looking at and listen to a description that's half-nonsense, but nicely whimsical. They don't know that it's half-nonsense, and they don't care because what the image shows is not a matter of life and death that requires accuracy. Oh, and of course, they won't even bother with the long description in the post. Not because it's so long, but because it's probably just as boring.
On the other hand, if I catered to the first group, those who want image descriptions to be short, I'd write a very short alt-text of no more than 200 characters that doesn't actually describe anything.
Then I'd have to deal with blind or visually-impaired people inquiring, "Yeah, but what does everything look like?" The question is justified. How are they supposed to know what such a place in a 3-D virtual world looks like, and what anything in this place looks like? So whatever elements are in this picture, these people would ask for detailed visual descriptions. Whenever I reveal to them what else is in the picture, they'd want detailed visual descriptions of it. And if they don't understand something, they'd ask for detailed explanations until they understand it.
In the end, they'd ask me why I didn't give all this information right away, why I'm so lazy and careless that they'd have to take matters into their own hands and squeeze this information out of me.
Of course, the second group, those who prefer whimsy over information, would ignore this description as well and substitute it with an AI description that's funnier.
If I catered only to this second group and simply used an AI description as it comes out of the AI, I'd end up telling nonsense and outright lying about my own images even though I should know better. But the second group doesn't care for accuracy.
The first group would dislike the description for having too much cruft in it and being too long. The third group would criticise it for not being informative enough, and I would have to explain to them that the description came from an AI, that it's factually half wrong, and I'd have to tell them what the image really shows. Lastly, Mastodon's alt-text police would scold me for the inaccurate rubbish that's my image description. Again, although I could have done better.
The last option would be to cater both to the second and the third group and make it accurate and informative and detailed and at the same time whimsical and entertaining. The workflow would start the same as my actual workflow.
But on top of the two days I need to research for and write the descriptions, I'd need another three days to make both descriptions funny and quirky. The long, detailed description would grow from 60,000 to 80,000 or 90,000 characters due to all the whimsical bits added to the same amount of information. The short description would still be limited in length, and so I'd have to sacrifice information for more whimsy.
In the end, I won't satisfy anyone. For the first group, it'd still be too long. For the second group, chances are that the alt-text is actually too long, because which AI churns out 1,400 characters of image description? If not, it probably isn't the right kind of whimsical, not the kind of whimsical that AIs produce. And all the whimsy couldn't rescue the 90,000-character long description from being too long. And the third group would say that I should have added less cruft and more information to the alt-text, and that all the uninformative cruft has blown the long description out of proportion.
I guess it's obvious that I can't satisfy everyone. That'd require a hyper-detailed description of a highly complex image, including transcripts of all bits of text, within no more than 200 characters which, at the same time, is also funny and quirky. In addition, I'd have to find a way to explain things which the image description reveals and identifies while still making these self-same explanations, including their context, available to people who can't access alt-text.
And as this is completely impossible, perfectly satisfying everyone all the same is just as impossible. In order for one person to be satisfied, another will have to suffer.
#Long #LongPost #CWLong #CWLongPost #FediMeta #FediverseMeta #CWFediMeta #CWFediverseMeta #AltText #AltTextMeta #CWAltTextMeta #ImageDescription #ImageDescriptions #ImageDescriptionMeta #CWImageDescriptionMeta #A11y #Accessibility

My main issue is with image descriptions because there are very different requirements that actually contradict each other.
It's usually said that images should be described in an alt-text that's clear, concise and, most of all, short. And only in that alt-text. And there are many blind or visually-impaired users who want alt-text that way.
Then there are those who are bored by purely informative image descriptions because they're so bland. They prefer AI descriptions over human descriptions because they'd gladly trade accuracy, which they can't verify anyway, for whimsical entertainment.
Well, and then there are those who love details, and who need everything described that they don't know the look of. Mastodon loves detailed image descriptions, much more than the general Web or other social media. But my original images tend to be chock-full of stuff that next to nobody has ever seen before and whose looks these people don't know. They're an edge-case that nobody thinks of when wishing for detailed, informative image descriptions.
I regularly spend hours or even days describing one image beyond what its resolution shows. Explaining in an image description that something isn't visible in detail because the resolution is too low or omitting it entirely for this reason feels too much like laziness and weaseling out. The result is a very long description in the alt-text already and an additional, outright massive description in the post.
Both are as informative as they can get: the short description within the limits of what Mastodon, Misskey and their forks don't cut off yet, the long description within the limits that I do impose on myself. But even the short one in the alt-text is too long for quite a few blind or visually-impaired users. And the long one in the post sometimes exceeds the length of an essay and grows as long as a short story.
I'm caught between describing too little for some users and making descriptions too long for others.
#Long #LongPost #CWLong #CWLongPost #FediMeta #FediverseMeta #CWFediMeta #CWFediverseMeta #AltText #AltTextMeta #CWAltTextMeta #ImageDescription #ImageDescriptions #ImageDescriptionMeta #CWImageDescriptionMeta #A11y #Accessibility

Hubzilla, which is what I'm commenting from right now, doesn't provide any technical means to add alt-text to profile pictures or channel pictures. Images in posts or comments or direct messages, also images in articles or on webpages or on wiki pages, yes; but not profile pictures or channel pictures.
But if it did, my descriptions would be concise, but certainly not short. My profile picture is not a photograph of a real-life person. It is a digital rendering of an avatar in a very obscure 3-D virtual world. I've described quite a number of renderings from such virtual worlds. And they always end up massive beyond any imagination. There simply is so much that needs to be described and explained because I can't expect it to be known.
And as it's very uninclusive to explain stuff in alt-text (not everyone can access alt-text), I need two descriptions for each image. A shorter but still very long one with no explanations for the alt-text and a long and detailed one with explanations to go elsewhere where it isn't too hard to find.
Take my profile picture, for example. It's past 11:30 PM right now where I am, and if I started describing it now, I wouldn't be done before 2 AM. So I can't give you an example.
And that'd be the short description, which probably won't include the comet and the fireworks and other small details, and which definitely won't include any explanations (the location, what OpenSimulator is, what Second Life is, what a grid is, what a region is, what a sim is, what a varsim is, what the Wolf Territories grid is, what the Wolf Fest is, what Hubzilla is etc.) because explanations don't belong in alt-text.
I mean, I can't expect blind people to know what the planet Jupiter looks like, and I definitely can't expect them to know what the old Hubzilla logo with the house around it looks like. But I must expect them to want to know what both look like. Thus, descriptions of both are due.
As for my channel picture, the one with the huge, weird-looking building: (Alternatively, search for #UniversalCampus; it's the post at the very bottom.) It took me two full days, morning to evening, to describe it. The alt-text is crammed to the last character within what Mastodon, Misskey and their forks allow, i.e. exactly 1,500 characters, a bit over 1,400 of which are image description. The long description in the post, hidden behind a summary and a long-post content warning, is over 60,000 characters long. Mind you, that isn't all cruft. Stylistically, it's actually pretty bland and bare-bones. It's just one huge infodump.
It's there where all explanations went, as well as all text transcripts. And I actually had to limit myself in comparison to how I used to describe images another few months earlier: I didn't give any detailed descriptions of more elaborate pictures within the picture, especially not beyond the actual in-world resolution of these pictures.
The problem with my images is that I can't make them perfectly accessible to everyone out there. I have to throw someone under the bus to make them accessible to someone else. And before I end up being called lazy or sloppy, I prefer to err on the "too much" side, also because I can't exclude someone actually needing to know all this.
#Long #LongPost #CWLong #CWLongPost #FediMeta #FediverseMeta #CWFediMeta #CWFediverseMeta #AltText #AltTextMeta #CWAltTextMeta #ImageDescription #ImageDescriptions #ImageDescriptionMeta #CWImageDescriptionMeta #A11y #Accessibility

To many Mastodon users, I guess every other Mastodon user, the Fediverse is only Mastodon. To quite a few others, the Fediverse is Mastodon plus some stuff glued onto Mastodon. Or it's Mastodon and Threads. Either way, it's regularly perceived as nothing but a microblogging platform with a very limited character count and otherwise hardly any features that go beyond Twitter, such as text formatting.
And, of course, your identity is always firmly tied to your account and thus one specific server instance. That's a fact. Allegedly.
Few people know how far it extends outward and away from Twitter clones and Twitter alternatives, much less what the Fediverse beyond Twitter clones and Twitter alternatives is like.
And it's way out there where Mike Macgirvin has been active since 2010. Five and a half years before Mastodon came out. Seven years before ActivityPub was first adopted, and even that was by one of Mike's creations.
The Fediverse is not defined by what's connected via ActivityPub, and especially, it is not defined by what's based on ActivityPub or what only uses ActivityPub. ActivityPub was officially established as a standard in 2018. It was first adopted in July 2017 by Hubzilla, followed in September 2017 by Mastodon, which had been entirely based on OStatus until then.
But the term "Fediverse" has first been used as early as 2012 which means that there was something back then that was decentralised and federated, even without ActivityPub. And that "something" already had features that partly went
way beyond what Mastodon has today.
The oldest Fediverse project still in existence is Mike's Friendica, first released in July 2010, then named Mistpark. A Facebook alternative. Not a Facebook clone like Mastodon tries to be a Twitter clone. A Facebook alternative that does a whole lot differently from Facebook, but better than Facebook, and that can also do a lot of things Facebook can't.
Friendica used to be based on its own DFRN protocol until 2023 AFAIK, but one of its killer features has always been that it speaks a whole lot of languages and connects to a whole lot of platforms, including OStatus and ActivityPub which it is based on today.
Friendica took over a few features from Facebook which are outright inconceivable in today's Twitter-clone-centred Fediverse. One is that it doesn't have Twitter's dichotomy of followers and followed. It has Facebook's concept of friends, connections that are always mutual, follower and followed at the same time, only that it doesn't refer to them as "friends" because, frankly, that term is stupid.
Another one is an actual conversation model, the same as Facebook and Tumblr and Reddit and all blogs out there. On Friendica, threads aren't loosely tied together from posts and more posts, nor do they rely on visible mentions. On Friendica, like on Facebook, on Tumblr, on Reddit and on blogs, each thread has exactly one post, the start post, and all replies are comments as opposed to posts.
This also changes a lot about what you receive. Not just singular posts. Instead, you receive posts with their entire conversations, including comments from after you've received the post. Regardless of whether they mention you, regardless of whether you're connected to the commenter. In fact, on the other hand, you do not receive singular comments by your connections. All this may be unimaginable to those who only know Twitter and Mastodon, too.
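As a toy illustration of the two conversation models, here's a sketch of my own in Python; this is not either project's actual data structure, just the shape of the idea:

```python
# Twitter/Mastodon style: a "thread" is standalone posts pointing at
# other posts via in_reply_to; each reply is itself a full post, and
# you only receive the ones that reach your instance individually.
mastodon_style = [
    {"id": 1, "in_reply_to": None, "text": "start of thread"},
    {"id": 2, "in_reply_to": 1, "text": "a reply, itself a full post"},
]

# Friendica/Facebook style: one start post owns its comments, and the
# conversation is delivered and kept as a whole, so comments made after
# you received the post still arrive attached to it.
friendica_style = {
    "post": {"id": 1, "text": "start of thread"},
    "comments": [
        {"id": 2, "text": "a comment, not a standalone post"},
        {"id": 3, "text": "a later comment that still reaches you"},
    ],
}
```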
But Friendica is still harmless compared to what Mike did later. In 2011, one issue of decentralised networks became apparent: If the instance you're on shuts down, you lose everything. And Friendica nodes often shut down, sometimes even without prior warning. People kept having to start over.
So Mike invented something to overcome this issue: nomadic identity. Not only did it make moving from one instance to another easier than anything else that exists even today, but even that's just a byproduct of the actual killer feature of nomadic identity: Your identity can reside on multiple instances at the same time, not as dumb copies, but as clones, as bidirectional, real-time, live, hot backups of each other.
This very idea sounds like absolutely utopian science fiction in 2024. But this, along with Mike's Zot protocol, which was the first to make it real, came to exist as early as 2011. And it was in 2012 that Mike actually implemented it in software for the first time: a fork of his very own Friendica with its backend re-written against Zot in its entirety, first called Red, then renamed the Red Matrix.
It introduced something else that's completely and utterly unimaginable in most of today's Fediverse: an elaborate permissions system. One that did not rely on everything else out there having the same permissions system, much unlike what Mastodon keeps trying to do.
All kinds of interactions require separate permissions, such as:
- seeing my public profile
- receiving my posts (this permission is granted when I confirm a connection)
- sending me posts
- seeing my stream
- liking, disliking or commenting on my posts
- sending me DMs
- etc.
Better yet, these permissions aren't granted to everyone all the same, but different sets of permissions can be granted to different subsets of my connections.
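A rough sketch of that idea, as my own toy Python rather than Hubzilla's actual schema or API:

```python
# Toy model of per-connection permissions. The permission names mirror
# the list above; everything else here is invented for illustration.

from dataclasses import dataclass, field

PERMS = {
    "view_profile", "receive_my_posts", "send_me_posts",
    "view_stream", "comment", "send_dms",
}

@dataclass
class Connection:
    handle: str
    granted: set = field(default_factory=set)

def confirm(conn: Connection) -> None:
    # Confirming a connection request only grants delivery of my posts
    # to them; whether they may send me theirs is a separate decision.
    conn.granted.add("receive_my_posts")

def allow(conn: Connection, *perms: str) -> None:
    conn.granted |= PERMS & set(perms)

# A confirmed connection that may comment and DM me, but whose own
# posts never land on my stream:
c = Connection("someone@example.social")
confirm(c)
allow(c, "comment", "send_dms")
assert "send_me_posts" not in c.granted
```

The point is that "following back" and every further interaction right are independent switches per connection, which is exactly what the Twitter-style follower model cannot express.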
In 2015, the Red Matrix became Hubzilla, which still exists today (I'm writing to you from it), and which is even more powerful. It is usually referred to as a "decentralised social CMS" because it's Facebook meets WordPress the blogging platform meets WordPress the CMS meets Google Cloud Services (and I'm not even kidding) meets some other stuff. If something truly is the Swiss army knife of the Fediverse, it's Hubzilla.
Hubzilla is still based on Zot6. It can optionally speak a whole lot of other things like ActivityPub or diaspora*.
But 75% of all Fediverse users have never even heard of Hubzilla, as a poll revealed not long ago. This is "the vast fediverse that exist outside Mastodon" that Mike is speaking of.
Also part of this "vast fediverse" are those of Mike's later works that still exist today.
There is the streams repository from 2021. What's in this repository is a more modern fork of a fork of three forks (of a fork) of a slimmed-down fork of Hubzilla. Mike intentionally gave it no name (again, I'm not kidding), no brand identity, almost no nodeinfo code and released it into the public domain right away, except for third-party contributions which still have their own licenses. It's colloquially referred to as (streams).
It goes back to Facebook-style social networking with the usual side of full-blown long-form blogging, and it keeps Hubzilla's WebDAV, CalDAV and CardDAV server functionality. It only speaks its own Nomad protocol (a Zot successor), Hubzilla's Zot6 and ActivityPub, and the latter can still be turned off.
(streams) basically came as not only an alternative to Mastodon and many other Fediverse projects, but especially as an "anti-Mastodon" that'd provide people with all the features Mastodon should have but doesn't while not being as hard to get into as Hubzilla. Also, it was intended to be forked and used as a base for all kinds of other Fediverse creations while being impossible to legally turn into something proprietary and closed-source due to the mish-mash of licenses in 3rd-party components.
But even fewer people seem to know (streams) than Hubzilla. Even most Hubzilla users only know it from hearsay, and on Mastodon, it's even more obscure. There have never been more than seven public instances with at least partially open registration; right now, it's down to three, plus two whose closure has been announced. All in all, (streams), which could have solved a whole lot of problems that plague Mastodon and its users, probably has fewer than two dozen users now.
This is a problem because Mike has retired from Fediverse development. His rate of contribution has slowed down a whole lot, and he said that the streams repository is basically up for grabs, and so is his most recent creation, the still experimental Forte from August, which is (streams) without Zot6 and without Nomad, but with nomadic identity via ActivityPub.
From such a tiny user base, it's practically impossible to recruit new developers. And (streams) and Forte do need developers and maintainers, and they need devs and maintainers who know a thing or two about their philosophy and their technology. Other than knowing that they exist and knowing PHP, that is.
It isn't like the Fediverse doesn't have developers. But everyone just keeps creating new iOS apps for Mastodon only, new websites to help Mastodon users (often with questionable compatibility with anything that isn't Mastodon, or none at all), new Misskey or Firefish forks and new Reddit clones (because they've heard of neither Mbin nor PieFed, much less Sublinks).
Nobody seems to care for the really good stuff far out there. That's because it's too far out for anyone to even have heard of it. But even if it weren't, I'm almost afraid it's too far out for a Fediverse that takes Mastodon for the ActivityPub reference implementation and the origin of it all to really comprehend.
#Long #LongPost #CWLong #CWLongPost #FediMeta #FediverseMeta #CWFediMeta #CWFediverseMeta #Fediverse #Friendica #Hubzilla #Streams #(streams) #Forte #FediverseDevelopers

Decentralised identity has been available for longer than Mastodon, let alone ActivityPub. Only that it is known as "nomadic identity" here.
It was first implemented by Friendica creator Mike Macgirvin in the Zot protocol in 2011 and in a Friendica fork named Red in 2012, later renamed the Red Matrix, eventually reworked and renamed Hubzilla in 2015.
Proof: This Hubzilla channel of mine actually simultaneously resides on two servers.
(Almost) everything that Mike has made afterwards, forks and forks of forks of Hubzilla, used to have or still has nomadic identity implemented.
His streams repository contains a fork of a fork... of Hubzilla that intentionally has no name, and that offers nomadic identity via the Nomad protocol with better compatibility with non-nomadic ActivityPub. In July, it had decentralised IDs as per FEP-ef61 implemented, a first step by Mike towards fully implementing nomadic identity in ActivityPub.
Forte, Mike's most recent fork from August, had all support for Nomad and Zot6 removed and now uses only ActivityPub while still offering nomadic identity. To the best of my knowledge, however, it has yet to be declared stable enough to be daily-driven, and it has no public instances.
Other than all this, a non-public development version of silverpill's Mitra has nomadic identity via ActivityPub in development. I'm not sure whether FEP-ef61 is implemented in the release version yet. It's the only Fediverse project aiming to implement nomadic identity that Mike Macgirvin has nothing directly to do with.
The ultimate goal is to be able to clone a Fediverse identity across project borders. Considering only stable releases, it's currently only possible to clone Hubzilla channels within Hubzilla, using Zot6, or (streams) channels within (streams), using Nomad.
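To make the FEP-ef61 approach a bit more concrete, here is a rough sketch of the resolution step as I understand the proposal; the gateway hosts are invented, and this is an illustration of the concept rather than any project's real code. The canonical ID of an actor or object is an ap:// URL whose authority is a DID, i.e. a cryptographic identity rather than a server name, and every instance hosting a clone can serve the object over HTTPS as a gateway.

# Minimal sketch of FEP-ef61-style ID resolution (my reading of the FEP).
# Because the canonical ID names a key, not a server, the identity survives
# any single server's death; clients just try another gateway.

CANONICAL_ID = "ap://did:key:z6MkrJVnaZkeFzdQyMZu1cgjg7k1pZZ6pvBQ7XJPt4swbTQ2/actor"

GATEWAYS = ["https://hub.example.org", "https://backup.example.net"]  # invented hosts

def https_locations(ap_url: str, gateways: list[str]) -> list[str]:
    """Translate an ap:// ID into fetchable HTTPS URLs, one per gateway."""
    assert ap_url.startswith("ap://")
    did_and_path = ap_url[len("ap://"):]
    return [f"{gw}/.well-known/apgateway/{did_and_path}" for gw in gateways]

for url in https_locations(CANONICAL_ID, GATEWAYS):
    print(url)  # try each in turn; if one server is gone, the next still works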
Unfortunately, Mike has officially retired from Fediverse development and now only occasionally submits code to the streams repository and Forte.
#Long #LongPost #CWLong #CWLongPost #FediMeta #FediverseMeta #CWFediMeta #CWFediverseMeta #Fediverse #DecentralizedIdentity #NomadicIdentity #ActivityPub #FEPef61 #Zot #Zot6 #Nomad #Hubzilla #Streams #(streams) #Mitra

Why alt-texts mostly don't come from outside Mastodon

CW: long (almost 3,500 characters), Fediverse meta, Fediverse-beyond-Mastodon meta, alt-text meta, image description meta
Well, one thing here is that alt-text is only a big deal on Mastodon and nowhere else. The Fediverse doesn't live Mastodon's culture, and exclusively Mastodon's culture, down to its farthest corner. And it never will, because Mastodon's culture contradicts the technical features in the rest of the Fediverse.
It isn't only newbies who know nothing about alt-text. If, say, someone started on diaspora* in 2011, switched to Friendica in 2013 and has been on Hubzilla since 2019, that's 13 years in the Fediverse, longer than Mastodon has even existed. And yet it's entirely possible that such a person has never come into contact with Mastodon's culture.
If they then get snapped at by a Mastodon user about what an ableist filthy pig they are for not writing alt-texts, they at first have no idea what this is even about. With some research and asking around, the picture that emerges is of some odd Mastodon fad. And something that comes from Mastodon doesn't have to be aped on something that's older than Mastodon (Hubzilla is from 2015). That, then, is the reaction.
The other thing is: Friendica and Hubzilla don't have a nice alt-text input field like Mastodon does. To attach an alt-text to an image there, the embedding BBcode has to be rebuilt by hand. For example, from
[zrl=https://hub.netzgemeinde.eu/photos/jupiterrowland/image/20a21c8e-9fdd-444c-b3bd-e19f0125c7ad][zmg=490x368]https://hub.netzgemeinde.eu/photo/20a21c8e-9fdd-444c-b3bd-e19f0125c7ad-2[/zmg][/zrl]
to
[zrl=https://hub.netzgemeinde.eu/photos/jupiterrowland/image/20a21c8e-9fdd-444c-b3bd-e19f0125c7ad][zmg=https://hub.netzgemeinde.eu/photo/20a21c8e-9fdd-444c-b3bd-e19f0125c7ad-2]Screen capture from the animated television series Fairly Oddparents, edited into an image macro. It shows a facial close-up of Mr Turner, lead character Timmy Turner's father. He is speaking in anger with his eyes squinted and directed to one side. The image has a large piece of text at the bottom which reads, "Gargron."[/zmg][/zrl]
On Hubzilla, this isn't even documented; at best, you learn how it works through hearsay. That means that if there's no alt-text, it may also be because someone doesn't know that alt-texts can be entered on Hubzilla at all, let alone how. And on Friendica, this may even be buggy, which nobody reports because nobody uses it.
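The rewrite shown above is mechanical enough that it could be scripted. Here is a rough sketch, assuming exactly the simple embed pattern from the example; it is my own illustration, not a tool that Friendica or Hubzilla actually ships:

import re

def add_alt_text(bbcode: str, alt: str) -> str:
    """Rewrite [zmg=WxH]URL[/zmg] into [zmg=URL]ALT[/zmg].

    Sketch only: real embed codes vary, and anything beyond this one
    pattern deserves proper parsing instead of a regex.
    """
    pattern = re.compile(r"\[zmg=\d+x\d+\](?P<url>[^\[]+)\[/zmg\]")
    return pattern.sub(lambda m: f"[zmg={m.group('url')}]{alt}[/zmg]", bbcode)

embed = "[zrl=https://hub.example/photos/x][zmg=490x368]https://hub.example/photo/y[/zmg][/zrl]"
print(add_alt_text(embed, "A described image."))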
(For the record: I describe all my images and even explain them. And I have been doing so for more than a year, so elaborately and in such detail that, depending on the image, I sometimes need anywhere from several hours up to two days per image, prior planning not included. My problem is rather that there's no way to discuss on a larger scale with other Fediverse users how image descriptions should best be done in extreme edge cases. I'd also like to use Mastodon-style CWs wherever necessary, but for comments, that's probably still buggy here on Hubzilla, where comments have no input field for them; see at the very top.)

CC:
#Long #LongPost #CWLong #CWLongPost #LangerPost #CWLangerPost #FediMeta #FediverseMeta #CWFediMeta #CWFediverseMeta #FediverseKultur #MastodonKultur #NichtNurMastodon #Friendica #Hubzilla #AltText #AltTextMeta #CWAltTextMeta #Bildbeschreibung #Bildbeschreibungen #BiBesch #BildbeschreibungenMeta #CWBildbeschreibungenMeta

Maybe a little nitpick from someone who takes image descriptions very seriously: I myself would never explain something in alt-text.
That'd mean that there's information exclusively in the alt-text and neither in the post text nor in the image. Those who can't access alt-text, and there can be many reasons for that, would have no access to this extra information. For this reason, I always put explanations into the post itself.
But then again, I don't have a character limit to worry about whereas at least half the Fediverse has no more than 500 characters.
I also take content warnings seriously, and I try to cater to the needs of those who have automated reader-side generation of content warnings (Friendica users, Hubzilla users, (streams) users, and Mastodon users who know that Mastodon 4.x has this feature; see the many hashtags below) as well as those who rely on poster-supplied content warnings in the summary field (everyone else). I might irritate the latter because I still use the summary field for actual summaries as well.
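For those unfamiliar with reader-side content warnings: the idea is that the reader's own software folds posts away based on the reader's keyword list and the post's hashtags, instead of relying on the poster. A toy model, not Friendica's or Hubzilla's actual filter code:

# Toy model of reader-side CW generation from hashtags.
my_cw_keywords = {"fedimeta": "Fediverse meta", "long": "Long post"}

def reader_side_cw(post_text: str, hashtags: list[str]) -> str:
    """Fold a post behind a warning if any of its hashtags matches my list."""
    for tag in hashtags:
        warning = my_cw_keywords.get(tag.lower())
        if warning is not None:
            return f"[CW: {warning}] (unfold to read)\n{post_text}"
    return post_text

print(reader_side_cw("A long rant about Mastodon...", ["FediMeta", "Long"]))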
It got to the point at which I stopped posting images that could possibly trigger someone on Hubzilla and created two (streams) channels for them, because Hubzilla can't make Mastodon blank out images, but (streams) can. I know that Mastodon 4.3 finally hides images behind CWs, but I think it'll be years until no instances run anything older than 4.3 anymore.
I also feel like I'm halfway into trouble because Hubzilla doesn't let me add Mastodon-style CWs to comments (= replies). It doesn't have a text field for them because that'd be akin to being able to hide a blog comment behind a summary. And the [summary]...[/summary] BBcode tags work on (streams) when going out via ActivityPub, but not on Hubzilla.
Some may say I overdo content warnings, and I overdo image descriptions even more. But without enough feedback, I prefer to err on the "too much" side.
#Long #LongPost #CWLong #CWLongPost #FediMeta #FediverseMeta #CWFediMeta #CWFediverseMeta #AltText #AltTextMeta #CWAltTextMeta #ImageDescription #ImageDescriptions #ImageDescriptionMeta #CWImageDescriptionMeta #CW #CWs #CWMeta #ContentWarning #ContentWarnings #ContentWarningMeta #Hubzilla #Streams #(streams)
However. I'd like to see more pointers as to which systems in the Fediverse support account migration and where it actually works.
Anyone who has ever switched knows what I mean.
Too much naïveté about what's possible, and the documentation hardly clears any of it up.
A complete move with everything included, i.e.
- posts, comments, DMs
- contacts (i.e. followers and followed)
- files
- settings
- filters
- etc.
only exists within Hubzilla and within (streams) anyway. The miracle of nomadic identity, which is, however, hampered quite a bit by the fact that Hubzilla's Zot6 and (streams)' Nomad are barely compatible with each other anymore. That means that with manual export and import, you can still move from Hubzilla to (streams), but not the other way round.
Of course, even nomadic identity has its limits. If I move within Hubzilla, my follower contacts on instances that themselves know nomadic identity are automagically rewritten to my new ID. That is, on my Hubzilla and (streams) contacts' instances. But only there, and not e.g. on Mastodon, which doesn't know nomadic identity.
Nomadic identity via ActivityPub is a concept, but one at a very early stage of development, and it's basically only available to developers. The actual plan is, at some point, the nomadic interconnection of all Fediverse projects, so that you could move and clone between all of them. But the developer who has advanced this the most has been retired from Fediverse development since the end of August. And even if it should one day be developed to stability: hell will freeze over before Mastodon implements it.
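To make the limitation concrete, here is a toy model of what happens to stored contact entries when a channel moves; the names are invented, and this is not real Hubzilla or (streams) code. Servers that understand nomadic identity rewrite the stored ID; everything else keeps pointing at the dead address.

# Toy model: only software that understands nomadic identity reacts to a move.
contacts_on_streams_server  = {"mike": "https://old-hub.example/channel/mike"}
contacts_on_mastodon_server = {"mike": "https://old-hub.example/channel/mike"}

def announce_primary_change(old_id: str, new_id: str, contact_table: dict,
                            understands_nomadic: bool) -> None:
    """Rewrite stored contact IDs, but only where the concept exists."""
    if not understands_nomadic:
        return  # the follower relationship silently dies
    for name, contact_id in contact_table.items():
        if contact_id == old_id:
            contact_table[name] = new_id

args = ("https://old-hub.example/channel/mike", "https://new-hub.example/channel/mike")
announce_primary_change(*args, contacts_on_streams_server, understands_nomadic=True)
announce_primary_change(*args, contacts_on_mastodon_server, understands_nomadic=False)

print(contacts_on_streams_server)   # rewritten to the new ID
print(contacts_on_mastodon_server)  # still the old, now-dead ID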
CC:
#Long #LongPost #CWLong #CWLongPost #LangerPost #CWLangerPost #FediMeta #FediverseMeta #CWFediMeta #CWFediverseMeta #Fediverse #Hubzilla #Streams #(streams) #NomadischeIdentität
For the likes of us, there's an additional complication: Friendica and Hubzilla have no separation into followers and followed. Here, as on Facebook, it's all one and the same. There are really only mutual connections.
That is to say, if someone from Mastodon wants to follow you, that only works in the form of a mutual connection, i.e. following back. And so, at some point, you "follow" heaps of people who don't actually have anything interesting to post and really just clutter up your stream, just so they can follow you.
What are you supposed to do? Let yourself be spammed? Delete the connection again? Then they can't follow you anymore, and they'll notice. Block them? Then they can't follow you anymore either, and they'll notice. Mute them? Then they can't send you anything at all anymore, not even comments or DMs.
That's why I'm glad about Hubzilla's permission settings. There, I can allow only those people whose posts I want to see to send me their posts. Comments and DMs are handled separately. On top of that, I also have filters per contact. The only ones who notice any of this are those who have delivery reports. Mastodon doesn't.
CC:
#Long #LongPost #CWLong #CWLongPost #LangerPost #CWLangerPost #FediMeta #FediverseMeta #CWFediMeta #CWFediverseMeta #Friendica #Hubzilla #Berechtigungen

Mastodon is increasingly moving from #NoAltTextNoBoost to #CrappyAltTextNoBoost, and I can see it move further to #NotEnoughAltTextNoBoost.
It is moving from only ostracising people for not providing image descriptions, past ostracising people for providing useless image descriptions, towards ostracising people for providing AI-generated image descriptions because those are at least partially wrong. The next victims may be people whose image descriptions leave out elements in the image which others may deem necessary to describe.
As quality requirements for image descriptions are being raised, I can't possibly lower the quality of my own image descriptions. If anything, I'll continue to upgrade my own image descriptions to stay ahead.
This is also why I'm worried about moving the long descriptions from the post text body into linked external documents. No longer having certain descriptions and explanations anywhere in the post itself may backfire, and the external documents themselves may not be accessible and inclusive after all.
Interestingly, this is not congruent with what I read from actually non-sighted people. They don't even seem to care about accuracy, which they can't verify anyway, as long as the image description is funny and/or whimsical. Since that seems to be exactly what AI delivers, it's no wonder that many blind people prefer image descriptions from BeMyAI over image descriptions from human experts.
I think I'll keep on writing my monster descriptions, two for each original image. If any of you who aren't sighted don't like them for not being whimsical enough, feel free to ignore the hours or days of work I've put into them, fire up your AI and have your own image description generated.
#Long #LongPost #CWLong #CWLongPost #FediMeta #FediverseMeta #CWFediMeta #CWFediverseMeta #Mastodon #MastodonPolice #FediPolice #AltText #AltTextMeta #CWAltTextMeta #ImageDescription #ImageDescriptions #ImageDescriptionMeta #CWImageDescriptionMeta #Blind #VisuallyImpaired #A11y #Accessibility #AI #AIVsHuman #HumanVsAI

Embrace-Extend-Extinguish.
Embrace = they join the Fediverse.
Extend = they introduce stuff that's non-standard and not really compatible with what already exists. Everything else has to adopt that non-standard stuff or risk becoming incompatible. Since they're one of the biggest players, or the biggest player, they can get away with it.
Threads has yet to do this. Mastodon has been doing it all the time.
Extinguish = they decide they don't care about interoperability anymore. They become incompatible with everything else. As big as they are, they get away with it. But they basically kill off everything else because all that "everything else" can hardly survive without being connected to that one big player.
Mastodon has more power to do this than Threads because Threads is a late addition to the Fediverse, Threads isn't even really welcome in the Fediverse, only a tiny fraction of Threads even federates, and I guess that to more people than not, Mastodon is the Fediverse.
(Public reply because the original post is public. I'm on Hubzilla and not on Mastodon, and on Hubzilla, all replies always have the exact same permissions as the original post. Permission changes within a thread are not supported.)
#Long #LongPost #CWLong #CWLongPost #FediMeta #FediverseMeta #CWFediMeta #CWFediverseMeta #EEE #EmbraceExtendExtinguish

So you think AI is always better at describing and even explaining any image out there than any human, including insiders and experts on the very topic shown in the image? 100% accurately? And at a higher level of detail than said humans?
Well, .
I'd like to see AI identify the place shown in the image as the central crossing at Black&White Castle and identify Black&White Castle as a standard-region-sized sim on Pangea Grid, a virtual world or so-called "grid" based on OpenSimulator.
I'd like to see AI explain the above, all the way down to a level that can easily be understood by someone who has only got a rough idea about what virtual worlds are.
I'd like to see AI correctly mention the pop-cultural links from this sim to German Edgar Wallace films and Frühstyxradio that are obvious to me.
I'd like to see AI correctly identify the avatar in the middle. By name. And know that identifying the avatar is appropriate in this context.
I'd like to see AI know and tell the real reason why the avatar is only shown from behind.
I'd like to see AI recognise that the image was not edited into monochrome, but it's actually both the avatar and the entire sim with everything on and around it that's monochrome.
I'd like to see AI transcribe text that's unreadable in the image. 100% accurately verbatim, letter by letter.
I'd like to see AI identify the object to the right and explain its origin, its purpose and its functionality in detail.
I'd like to see AI discover and mention the castle in the background.
I'd like to see AI accurately figure out whether it's necessary to explain any of the above to the expected audience and, if correctly deemed necessary, do so. And explain the explanations if correctly deemed necessary.
I'd like to see AI know and correctly mention which direction the camera is facing.
Finally, I'd like to see AI automatically generate two image descriptions: a full and detailed one with all explanations, and a shorter one that can fit into 1,500 characters minus the number of characters necessary to mention the full description and explain its location.
I did all of the above. And more.
In fact, if AI is supposed to be better than me, I expect it to identify all trees in the image, not only the mountain pines, to give an even more detailed description of the motel advertisement and to give a much more detailed description of the map, including verbatim transcripts of all text on it and accurate information on what place is shown on the map in the first place.
If AI is supposed to be better than me, I expect it to
- describe, explain and transcribe everything that I describe, explain and transcribe
- describe, explain and transcribe even more on top of that
- even more accurately than I do
- more whimsically
- and in much fewer characters.
All of this, by the way, fully automatically, with no human intervention except for maybe a simple prompt to describe the image for a certain Fediverse project.
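Incidentally, the budget for the shorter description is simple arithmetic, which a sketch makes explicit. The 1,500 figure is Mastodon's alt-text limit; the pointer text is invented:

ALT_TEXT_LIMIT = 1500  # Mastodon's cap on alt-text length
pointer = "Full description with all explanations in the post itself."

def fits(short_description: str) -> bool:
    """Check the combined alt text against the limit before posting."""
    return len(short_description) + 1 + len(pointer) <= ALT_TEXT_LIMIT  # +1 for a newline

print(f"Characters left for the description itself: {ALT_TEXT_LIMIT - len(pointer) - 1}")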
#Long #LongPost #CWLong #CWLongPost #OpenSim #OpenSimulator #Metaverse #VirtualWorlds #AltText #AltTextMeta #CWAltTextMeta #ImageDescription #ImageDescriptions #ImageDescriptionMeta #CWImageDescriptionMeta #A11y #Accessibility #AI #AIVsHuman #HumanVsAI

Allegedly, a piece of "good" advice for image descriptions is to always describe images like you'd describe them to someone on a landline phone.
Sorry, but that's nonsense. At least for anything that goes significantly beyond a real-life cat photo.
If you describe an image through a phone, you describe it to one person. Usually a person whom you know, so you've at least got a rough idea of what they need described. Even more importantly, you can ask that person what they want to know about the image if you don't know. And you get a reply.
If you describe an image for a public Fediverse post, you describe it to millions of Fediverse users and billions of Web users. You can't know what they all want, nor can you generalise what they all want. And you can't even ask one of them what they need described before or while describing, much less all of them. In fact, you can't ask at all. And yet, you have to cater to everyone's needs the same and throw no-one under a bus.
If I see a realistic chance that someone might be interested in some detail in one of my images, I will describe it. It won't be in the shorter description in the alt-text; instead, it will be in the long description, which I've always put directly into the post so far, but whose placement I'm currently reconsidering. If something is unfamiliar enough to enough people that it requires an explanation, I will explain it in the long description.
Right now, only meme posts are an exception. They don't need as much of a visual description as long as I stick to the template, and a poll has revealed that people prefer externally linked third-party explanations over my own blowing the character count of the post out of proportion. This is the one time that I can safely assume that I actually know what most people want.
#Long #LongPost #CWLong #CWLongPost #AltText #AltTextMeta #CWAltTextMeta #ImageDescription #ImageDescriptions #ImageDescriptionMeta #CWImageDescriptionMeta #Inclusion #A11y #Accessibility
To put it bluntly, Solid is Tim Berners-Lee's attempt at re-inventing wheels named Hubzilla and nomadic identity.
Unlike Solid, Hubzilla is neither a vague concept, nor is it in its earliest alpha stages of development. Hubzilla was launched as early as 2015, and it has just released version 9.4.1. It is based on something called the Red Matrix from 2012. The idea of nomadic identity itself is from 2011. So it's a tried-and-true, well-established solution. Not to mention that all of this was created by a protocol developer, Mike Macgirvin.
When Tim Berners-Lee invented Solid, he didn't know about the Red Matrix or Hubzilla or the concept of nomadic identity because none of that stuff had been advertised anywhere.
He has learned about Hubzilla and nomadic identity in the meantime. But now it's too late to stop, abandon Solid and support Hubzilla and its successors instead. So he keeps pushing Solid forward, knowing quite well that he's directly competing against Hubzilla and its successors, the streams repository from 2021 and Forte, which was created in July.
Truth be told, Solid can't hold a candle to Hubzilla and its successors. Hubzilla and (streams) are both still based on protocols of their own while also optionally being able to communicate through ActivityPub. But on (streams), nomadic identity via ActivityPub seems to be implemented already. And Forte now uses only ActivityPub while still being fully nomadic.
And so Solid can only survive by being advertised to the general public before people learn about Mike Macgirvin's creations. After all, about 75% of the Fediverse has never even heard the name "Hubzilla", including many developers. The successors are even more obscure.
Unfortunately, Mike Macgirvin officially retired from Fediverse development on August 31st. While he still occasionally submits code to both the streams repository and Forte, the times when he worked on them just about every day are over. And unlike Hubzilla, which was passed on to the community in 2018, the streams repository is struggling to find new main maintainers, and Forte even more so because it doesn't have any users yet, AFAIK.
#Long #LongPost #CWLong #CWLongPost #FediMeta #FediverseMeta #CWFediMeta #CWFediverseMeta #Fediverse #Solid #RedMatrix #Hubzilla #Streams #(streams) #Forte #NomadicIdentity

It's in maintenance mode, .
I believe so. Last I heard, someone inherited the code base and its repo without necessarily wanting to, and recently shut it down (IIRC). Meanwhile, the old flagship instance and domain firefish dot social has been dead for a long time.
Yup. Kainoa, the original developer of Firefish, dropped the project into Naskya's lap, but none of the official servers.
- The old repository is dead; Naskya copied everything over to .
- The old official lighthouse instance is dead; Naskya set up a replacement.
- The old official website is dead; Naskya did not replace it.
Calckey died when Kainoa turned it into Firefish.
#Long #LongPost #CWLong #CWLongPost #FediMeta #FediverseMeta #CWFediMeta #CWFediverseMeta #Calckey #Firefish