But does Friendica have a dedicated summary text field again? The old trinity of title, summary, post?
I was writing about Mastodon apps and Friendica, saying that a Mastodon app needs something that works exactly like Mastodon's CW field. Something it can reliably stuff a CW into, so that it then works just as reliably as a CW as it does on Mastodon. And all that without being able to cater for any project-specific peculiarities.
The app must always be able to "operate" Friendica blindly, 100% reliably, just like Mastodon.
Of course, on Friendica you can use the title as the summary/CW. But first, that has to be configured. Second, the app would then have to write the CW into the title field. A Mastodon app, however, will stubbornly try to push the CW into the summary field, whether there is one or not.
If it's supposed to write the CW into the title field, that's a Friendica-specific special feature. If it's supposed to detect from the settings whether the CW should go into the title or be realised as BBcode, for example, that's even more of a Friendica-specific special feature.
I don't know, maybe Fedilab can do that. But all those practically pure Mastodon apps, developed by people who have never in their lives even heard of Friendica, can't.
An app developed purely against Mastodon will, in turn, offer a text field for alt-text that writes the alt-text into an image attached to a post as a file. That's how it works on Mastodon, and only that way.
As far as I know, exactly that doesn't work on Friendica. If you want alt-text there, you have to upload the image first, then embed it in-line in the post, and then write the alt-text into the BBcode.
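If I remember Friendica's BBcode correctly (take this as a sketch; the URL is purely hypothetical), such an in-line embed with alt-text looks roughly like this:

[img=https://example.com/media/photo.jpg]Here goes the alt-text for the embedded image[/img]

The app would have to upload the file, obtain its URL and then write this tag into the post body itself instead of filling in a dedicated alt-text field.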
In theory, an app could do that too. But that would again be a Friendica special feature. And a pure Mastodon app by a developer who doesn't know Friendica at all doesn't support Friendica special features, but really only and exclusively things that Mastodon can do and that Mastodon does exactly that way.
#Long #LongPost #CWLong #CWLongPost #LangerPost #CWLangerPost #FediMeta #FediverseMeta #CWFediMeta #CWFediverseMeta #CW #CWs #CWMeta #ContentWarning #ContentWarnings #ContentWarningMeta #AltText #AltTextMeta #CWAltTextMeta #Friendica #MastodonApp #MastodonApps

He followed this pattern with Friendica and Hubzilla as well.
Well, it was similar.
By 2012, Friendica had long been stable and essentially finished. He handed it over to the community, forked off Red and experimented with Zot.
By 2018, Hubzilla was stable and essentially finished. He handed it over to the community, forked off Osada and Zap and experimented with Zot6.
By 2020, Zap was stable and essentially finished. He handed it over to the community, and the second Osada along with it. After the community promptly discontinued Osada because it was nearly identical to Zap anyway, Mike forked off a third Osada, a new Mistpark and a new Redmatrix to experiment with Zot8.
Nothing stable ever came out of those experiments. Instead, in 2021 he forked Roadhouse from one of the three to experiment with the next evolutionary stage of Zot, which was then renamed Nomad.
(streams), from the same year, was then meant to become the stable version of Roadhouse. And Mike didn't want to fork (streams) again. But then Mike reached a point where he said: nomadic identity also works with ActivityPub. I no longer need a protocol of my own, I just have to help bring ActivityPub to where it can replace Nomad.
But because he didn't want to fork (streams), he tried to implement the whole thing on (streams) itself. Unfortunately, in practice this doesn't run as smoothly as it was meant to in theory.
But instead of finally turning his only stable release into a permanent construction site, he has now forked off Forte and uses that for tinkering, while (streams) is supposed to get back on stable footing. He does that himself, too, because nobody takes it over for him. And not even three years after (streams) came into being, the (streams) community is still too small to take over the development of (streams) any time soon. Hardly anyone moves over from Hubzilla, nobody comes to (streams) as a complete newcomer anyway, hardly anyone on Mastodon knows that (streams) exists, and those who do know don't dare go there.
And so Mike will keep developing both. Forte will probably remain a soft fork at first, so that Mike doesn't have to do the same work twice.
Seen that way, it's more comparable to Zap and the first two Osadas, when Mike already had two projects with partly different codebases running at the same time.
CC:
#Long #LongPost #CWLong #CWLongPost #LangerPost #CWLangerPost #FediMeta #FediverseMeta #CWFediMeta #CWFediverseMeta #Friendica #Red #RedMatrix #Hubzilla #Osada #Zap #Mistpark2020 #Misty #Redmatrix2020 #Roadhouse #Streams #(streams) #Forte

At the moment, Mike is actively developing both.
It may well be that he wants to bring (streams) to stability first and keeps Forte to experiment with new things. I don't know, but it may well be that Forte becomes the first attempt to get by entirely without a protocol of his own. (streams) is, after all, the end of a chain of further developments of Zot. Forte could be the next protocol testbed: this time everything with ActivityPub only. Then he doesn't have to use (streams) for that.
In principle, that would be like 2018 after he forked off Osada and Zap, had he also officially remained the Hubzilla maintainer back then.
#Long #LongPost #CWLong #CWLongPost #LangerPost #CWLangerPost #FediMeta #FediverseMeta #CWFediMeta #CWFediverseMeta #Hubzilla #Osada #Zap #Streams #(streams) #Forte

Okay, let me demonstrate it this way. I hope Glitch can at least display multiple-line code blocks. If not, I give up, for there's absolutely no way of showing you what I mean.
On Twitter, Mastodon and everything else that "does microblogging", a hashtag includes the hashtag character in the link. In

#[zrl=https://hub.netzgemeinde.eu/search?tag=Hashtag]Hashtag[/zrl]

everything is part of the link and part of the hashtag; all of this is one link. Look at the hashtags that you're used to. The # is always part of the link and part of the hashtag.

This is what Mastodon used to expect. And this is what Glitch (which is where you are) and Iceshrimp.NET (which is where is) still expect and nothing else.

Friendica, Hubzilla (which is where I am) and (streams) work differently, also because Friendica is five and a half years older than Mastodon, Hubzilla is an indirect Friendica fork, and (streams) is an indirect Hubzilla fork. In

#[zrl=https://hub.netzgemeinde.eu/search?tag=Hashtag]Hashtag[/zrl]

the # is decoration, not part of the link, not part of the hashtag. Only "Hashtag" is the link and the actual hashtag.
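To make the difference concrete, this is roughly what the two conventions come out as in the rendered HTML (instance URLs and markup simplified and from memory, so treat them as an approximation):

Mastodon and friends: <a href="https://mastodon.example/tags/Hashtag">#Hashtag</a>
Friendica, Hubzilla, (streams): #<a href="https://hub.example/search?tag=Hashtag">Hashtag</a>

In the first case the # sits inside the anchor, in the second case it sits in front of it.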
Neither Glitch nor Iceshrimp.NET can handle this. Their devs have probably never seen any of this. They neither know it exists nor even expect it to exist.
Result: Glitch "sanitises" the unknown, unexpected, "IDFK what this is" code away, just like Mastodon probably used to do until someone from Friendica or Hubzilla filed a bug on GitHub. And Iceshrimp.NET doesn't know how to handle this unexpected code at all. It fails ungracefully by going completely haywire.
I'm going to file a bug on the Glitch repository now. For Iceshrimp.NET, I'll need an account on its repository.
The following hashtags are only for discovery purposes and for sensible users to filter this comment out.

#Long #LongPost #CWLong #CWLongPost #FediMeta #FediverseMeta #CWFediMeta #CWFediverseMeta #Glitch #GlitchSoc #Iceshrimp #Iceshrimp.NET #Friendica #Hubzilla #Streams #(streams) #Hashtag #Hashtags #HashtagMeta #CWHashtagMeta

I don't think (streams) or Forte will introduce the Mastodon API. (streams) definitely doesn't support it, and I'd be very surprised if the Mastodon API was the first thing Mike added to Forte in the three weeks since the fork.
For one, Mike wants to keep his current developments devoid of any non-standard, Mastodon-only technology. He wants to be wholly independent from Gargron's decisions.
Besides, it'd be utterly useless. All apps out there that support the Mastodon API also only support the Mastodon 3 feature set. If you're very lucky, they've added a thing or two from Mastodon 4.
But what Mastodon can do, and what these apps can control, is only a tiny fraction of what even Friendica is capable of, much to the dismay of those who try to use Friendica with Fedilab.
You can't format your text because you can't format text on Mastodon. Or, if you're semi-lucky, you can add text-formatting BBcode (or Markdown) manually, and the app won't go haywire, but you can neither preview your post, because Mastodon has no preview feature, nor see the formatting after sending, because the app still doesn't support the text formatting that Mastodon 4 already introduced last year.
You can't add a title because Mastodon doesn't support the creation of titles.
You can't add a CW. Friendica has no dedicated CW field, it has removed the abstract field that used to be the same thing, and probably no Mastodon app translates the content of its CW field into [abstract=apub]Here goes the CW[/abstract] BBcode.
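As a sketch of what such a translation would have to produce (the BBcode tag is the one Friendica actually uses for abstracts; the wording is hypothetical), the app would have to prepend something like this to the post body:

[abstract=apub]Here goes the CW that ActivityPub-based receivers such as Mastodon display as the summary[/abstract]
And here goes the actual text of the post.

As far as I can tell, no stock Mastodon app does anything of the sort.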
You can't add alt-text. Friendica doesn't support the Mastodon way of alt-text (adding it to an image that's attached as a file), and Mastodon doesn't support the Friendica way of alt-text (grafted into the in-line image-embedding BBcode) because Mastodon already refuses to display in-line images.
I guess some apps even have a hard limit of 500 characters, Mastodon-style counting including the CW field and counting links as 23 characters each. They won't even let you type more. Friendica is unlimited in character count.
And all this is just the basics on Friendica.
Now remember that (streams) is a fork of a fork of three forks of a fork (of a fork) of a fork (Hubzilla) (of a fork) of Friendica. And Forte is a fork of (streams).
(streams) is a whole lot more powerful and complex than Friendica, not to mention Mastodon. It has even more features which a typical Mastodon app doesn't support, but which are absolutely essential in using (streams).
A Mastodon app supposes that identities work like Mastodon everywhere: Your account is your identity. You have exactly one identity on your account. You have that identity on exactly one account and exactly one instance. Your identity is firmly tied to your home instance.
Thus, a Mastodon app would even fail to log into anything that was ever forked from Friendica or any one of its direct or indirect forks, save maybe for the first Osada. From Red of 2012, directly forked from Friendica, to Hubzilla from 2015, to the now-defunct Zap from 2018, to (streams) from 2021, to Forte from 2024, plus all experimental forks in-between, they all (except maybe the first Osada) support multiple channels per account and nomadic identity.
This means: Your account is not your identity. It's just the access to one or multiple identities of yours on the same instance. Your channel is your identity, and you can create any number of channels on the same account. Also, and this is the nomadic part, you can have the same channel, the same identity, simultaneously on multiple instances with one account each.
A Mastodon app couldn't even deal with one login giving access to multiple identities. If you had multiple channels, and you were lucky enough to connect one, namely your main channel, you'd be unable to connect any of your other channels. Even by logging out and back in.
And even if you could connect (streams) to a Mastodon app, essential features such as per-post permission control couldn't be used because the app doesn't have them implemented, because Mastodon doesn't have these features.
Mastodon supports:
- Public (default)
- Unlisted
- Only to your followers
- Only to mentioned accounts
Every last Mastodon app is hard-coded to these four and only these four options.
(streams) supports:
- Public
- Only to your connections (usually the default)
- <Insert all your lists here, one by one, because (streams) can post to these individually>
- Only to yourself
- An individual choice of connections and lists which you can check and uncheck
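For comparison, and hedged as a rough sketch (token and text are placeholders): whatever buttons a Mastodon app shows, it ultimately boils the choice down to a single visibility value in the API call it sends, and that value only knows the four Mastodon options above. The per-list and per-connection choices of (streams) simply have no representation there.

POST /api/v1/statuses
Authorization: Bearer <access token>

status=Hello, Fediverse!
visibility=private

Here, "visibility" accepts exactly "public", "unlisted", "private" (followers only) or "direct" (mentioned accounts only), and nothing else.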
Some apps may also be able to control some configuration. However, they only have Mastodon options implemented. Almost all these buttons would be functionless with a (streams) channel connected. At the same time, 99% of all (streams) configuration would be inaccessible in a Mastodon app, simply because Mastodon doesn't have these features and options.
So even if you manage to send some simple, tweet-like posts through a Mastodon app, you will need the Web interface for absolutely everything else. And that will be a lot.
(streams) works perfectly as a Progressive Web App (PWA), it's designed for that, and that's actually the officially recommended way to use it on a phone. Same will most likely go for Forte.
If you really want a dedicated, non-PWA app for iOS and Android, that app would be a monster. An absolute unit. Similar to K-9 Mail in complexity, if not even worse.
Worlds beyond any Mastodon app. That's because the app would have to be able to control the whole thing, not just the essentials. Besides, who gets to decide for everyone what's essential?
#Long #LongPost #CWLong #CWLongPost #FediMeta #FediverseMeta #CWFediMeta #CWFediverseMeta #Mastodon #MastodonApp #MastodonApps #MastodonAPI #Streams #(streams) #Forte

In this case, the handling of long-form content should be configurable per account.
The settings could be:
- Render fully (default; what the devs of Hubzilla and (streams) want)
- Always link (what Mastodon does with Article-type objects)
- Link if title or subject is available, otherwise render fully
- Link if title or subject is available, otherwise block
- Block everything over 500 characters
Maybe there could be an additional setting for formatted content in general:
- Render fully (default)
- Render partially (the Mastodon 4 way right now)
- Remove all formatting (what Mastodon up until 3 did)
- Block
#Long #LongPost #LongPosts #LongPostMeta #CWLong #CWLongPost #CWLongPostMeta #500Characters #FediMeta #FediverseMeta #CWFediMeta #CWFediverseMeta #Mastodon #Friendica #Hubzilla #TextFormatting #RichText

The problem is most likely that Mistpark, now Friendica, introduced hashtags that do not include the "hashtag character" in the link. All of Friendica's descendants took this behaviour over, including Hubzilla and (streams).
#Hashtag

Twitter/X has always included the "hashtag character" in the link. All the Twitter replacements, Mastodon, Pleroma, Misskey and their forks, have taken this behaviour over.

#Hashtag

However, they largely expect or used to expect everything in the Fediverse to do that. The way Friendica and its descendants do hashtags is unexpected and highly irritating to them. Vanilla Mastodon has largely fixed its hashtag handling; only the extraction of the hashtags in the last line doesn't work properly yet.
Apparently, this fix was introduced to vanilla Mastodon after the Glitch fork, and Glitch has never taken it over. And the Iceshrimp.NET devs seem not to know about this phenomenon either.
#Long #LongPost #CWLong #CWLongPost #FediMeta #FediverseMeta #CWFediMeta #CWFediverseMeta #Hashtag #Hashtags #HashtagMeta #CWHashtagMeta #Glitch #GlitchSoc #Iceshrimp #Iceshrimp.NET #Friendica #Hubzilla #Streams #(streams)

You're missing all three natively nomadic projects by Friendica creator Mike Macgirvin:
- Hubzilla
  Created in March 2015
  Main repository:
- technically nameless and brandless, unofficially and colloquially referred to as (streams)
  Created in October 2021
  Main repository:
- Forte
  Created in July 2024
  Main repository:
#Long #LongPost #CWLong #CWLongPost #FediMeta #FediverseMeta #CWFediMeta #CWFediverseMeta #Hubzilla #Streams #(streams) #Forte

The best example: the "social media experts" and "tech journalists" who can't even imagine a life away from the corporations' whip anymore and who then constantly tout it to others as the land of milk and honey, just because they'd have an "advantage" there.
There are two kinds of them.
First, the ones who go:
"What is that?"
"Something like that isn't even technically possible."
"From when? Oh, that's surely long dead again by now."
"What, that still exists? But nobody uses it anyway. I'm telling you, it's dead."
"Over ten million? Come on, that's nothing at all. Nobody cares about that."
And then there are those who are quite simply shills. They may well have heard of the Fediverse, but they either don't mention it, or they deliberately and purposefully badmouth it. Otherwise their follower counts on X and Facebook and Instagram might collapse if people were inspired to move.
#Long #LongPost #CWLong #CWLongPost #LangerPost #CWLangerPost #FediMeta #FediverseMeta #CWFediMeta #CWFediverseMeta

#friendica is like a Swiss Army knife with a manual gearbox; it puts you in the driver's seat of as many decisions as possible.
And Friendica isn't even that bad anymore. It's just that people are too used to pre-Musk Twitter and their favourite Mastodon mobile app.
I haven't used Friendica since the 2010s, but today's Friendica looks more user-friendly than today's Facebook, and Friendica is meant to be an alternative to Facebook.
I mean, there's still Hubzilla, a Friendica fork by Friendica's creator, which puts even vastly more features and options and settings than Friendica has (however that's possible) behind Friendica's UI as it was in 2012.
Also, Friendica has outright amazing user documentation. Hubzilla's user documentation is incomplete, what's there reads like a technical specification rather than a user manual, and parts of it are glaringly outdated, sometimes even still referring to Hubzilla as Red. At least for English-speaking folks, the most up-to-date and useful Hubzilla documentation in existence is hearsay.
#Long #LongPost #CWLong #CWLongPost #FediMeta #FediverseMeta #CWFediMeta #CWFediverseMeta #UIUX #UserFriendly #Friendica #Hubzilla

But for some people even that isn't enough, because they've become too comfortable to do anything themselves.
The vast majority of Fediverse users came from Twitter to Mastodon from 2022 onward and were already thoroughly pampered by the algorithm on Twitter. Or let's say they believe they were, because they thought their content really was distributed fully automagically to those who are interested in it.
When they went to Mastodon, they took it for an even more (user-)friendly, giant cuddle corner, especially since there they actually felt even more pampered. Or at least pampered differently.
Instead of choosing a project and then an instance, they were conveniently railroaded straight from Twitter to mastodon.social, without even being bothered with the information that there's anything else out there. They only had to learn that after they'd already made themselves comfortable on Mastodon.
And now they expect that, on the one hand, everything is served to them on a silver platter and, on the other hand, their content is served to everyone else on a silver platter, without them lifting a finger. Having to come up with hashtags for every single toot is waaay too much effort, after all.
So they keep using Mastodon the way they used Twitter. Completely without hashtags. And they wait for "the Fediverse" to finally introduce "an algorithm", by which they mean exactly the one Twitter used to have.
Good grief, some are even too lazy to toot themselves. They last tooted weeks ago, if at all. But they boost at least a dozen posts by others every day.
Yet that is a form of self-empowerment, being able to have control yourself. But you first have to relearn that, and you have to want it, too.
And that's exactly what nobody wants. You can see that in other areas, too.
Content warnings, for example. On Friendica, they can optionally be generated automatically for each user individually by a totally simple text filter. Hubzilla inherited that from Friendica, (streams) indirectly from Hubzilla, and it would take a deal with the devil for brand-new Forte not to have it anymore. And even Mastodon extended its filters in 2023 so that they generate content warnings.
Only nobody cares. Not only does nobody on Mastodon have a clue that Friendica and its descendants have this, which is why Mastodon users love to complain about the missing CWs and the masses of hashtags coming from there. No, hardly anyone knows that Mastodon itself now has this functionality, either. That's probably also because almost everyone is on smartphone apps, which presumably all have no support for it built in and, unless they were newly developed this very year, only cover Mastodon 3 features. In other words, many smartphone apps probably have no controls for filters at all, which is why very many Mastodon users don't even know that Mastodon has filters.
But even if they knew about filter-generated content warnings, they wouldn't use them. First, too much effort. Second, even more effort if you first have to launch that stupid, annoying Safari or Chrome, google your own Mastodon instance, log into the Web interface, remember your own Mastodon password and then hunt for the filters in a Web interface you never use. That's exactly why many don't filter at all. It's more convenient to cry for the moderators when something bothers you, so they make it go away.
And third, that's just not "how it's done" on Mastodon. The entire Mastodon culture that many try to force onto the whole Fediverse is based on the state of Mastodon in late 2022 and early 2023. Not only were there no CW-generating filters back then, but 95% of Fediverse users also still had no clue about Mastodon.
So people insist on being served content warnings by all other users in the summary field. Namely, all the content warnings they personally want. But at the same time, only exactly those and no others. Everyone absolutely must warn about cat pictures because those are annoying. But conversely, alcohol warnings are annoying, because you do like to treat yourself to a tasty little beer now and then.
That this inevitably leads to conflicts of interest seems not to have dawned on anyone on Mastodon yet. Nor has the fact that those annoying alcohol warnings are there because thousands of Mastodon users genuinely need exactly those alcohol warnings and have demanded them.
Another example: Hubzilla, (streams) and most likely Forte as well have extensive, detailed permission settings, even individually per contact, where Mastodon only has mute, block and calling the moderators. On (streams) they're even a little more efficient.
Still, hardly anyone moves there from Mastodon. Rarely to Hubzilla, not at all to (streams).
Okay, you'd have to move again and learn something new. Okay, the Web interfaces aren't the most user-friendly. Okay, for Hubzilla there's only an Android app that hasn't been maintained in almost five years, and nothing for iOS or for (streams). Okay, none of them work with Mastodon apps. Okay, there are no automatically generated public lists of (streams) instances except on (streams) instances.
But shouldn't the advantages outweigh the disadvantages? You have to swallow some bitter pill either way. In return you'd get not only nomadic identity and practically unlimited characters, but also a big load of self-empowerment through per-person permissions and, optionally, even per-person filters.
And even if, according to one poll, at least 75% of all Fediverse users have never heard of Hubzilla: Mike's stuff isn't completely unknown anymore. Mike now advertises in the Fediverse himself, including with (streams)' means of self-empowerment.
But those are exactly what nobody wants. I dare say this advertising only scares Mastodon users off even more. They don't want to be empowered to moderate for themselves. They don't want to do anything themselves. They just want to be pampered even more.
That's also one of the main reasons why there will never be a BlackStreams or a BlackForte.
CC:
#Long #LongPost #CWLong #CWLongPost #LangerPost #CWLangerPost #FediMeta #FediverseMeta #CWFediMeta #CWFediverseMeta #Fediverse #Mastodon #Friendica #Hubzilla #Streams #(streams) #Forte #CW #CWs #CWMeta #ContentWarning #ContentWarnings #ContentWarningMeta #Inhaltswarnung #Inhaltswarnungen #InhaltswarnungenMeta

New studies estimate long-COVID rates, identify risk factors
About 34% of infected postmenopausal women had symptoms for 8 weeks or more, while 61% of survivors in a second study had symptoms at 2 years.
Yes, but many Mastodon users see Mastodon as the client.
For them, Pleroma, Akkoma, Misskey, Firefish, Friendica & Co. are nothing other than Mona, IceCubes, Tusky, Fedilab & Co.: alternative graphical frontends for the same thing. Only that the former are Web frontends and the latter are phone apps.
Of course, this almost always goes hand in hand with the unshakable "knowledge" that Eugen Rochko invented the Fediverse.
#Long #LongPost #CWLong #CWLongPost #LangerPost #CWLangerPost #FediMeta #FediverseMeta #CWFediMeta #CWFediverseMeta #Fediverse #Mastodon #NichtNurMastodon

vanilla mastodon is somehow the only software that doesn't allow admins to adjust the character limit.
They can adjust the character limit. But it requires digging into the source code because the limit is hard-coded. Basically, if you as an admin want a higher character limit, you have to soft-fork Mastodon.
And whenever you upgrade your instance to a new version, your copy is reset to vanilla, and you have to re-apply your changes.
Better yet: Changing the character limit by modifying the source code is, technically speaking, illegal unless you publish the changes. It is only compliant with the GNU Affero General Public License if the admin makes the modified source code available to their users, e.g. by publishing it on GitHub as a fork of vanilla Mastodon. But hardly any Mastodon admin goes through the hassle of learning Git just to get more characters.
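For the record, and from memory, so the exact paths may have shifted between versions: the limit is a plain constant in Mastodon's Ruby code, and the Web client hard-codes the same number again for its character counter, so a soft fork has to patch both sides.

# app/validators/status_length_validator.rb (vanilla Mastodon, excerpt from memory)
class StatusLengthValidator < ActiveModel::Validator
  MAX_CHARS = 500   # raise this in a soft fork for a higher server-side limit
end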
CC:
#Long #LongPost #CWLong #CWLongPost #FediMeta #FediverseMeta #CWFediMeta #CWFediverseMeta #AGPL #Mastodon #CharacterLimit #CharacterLimits #500Characters

And while I'm at it, here's a quote-post of my comment in which I review the second AI description.
It's almost hilarious how clueless the AI was again. And how wrong.
First of all, the roof isn't curved in the traditional sense. The end piece kind of is, but the roof behind it is more complex. Granted, unlike me, the AI can't look behind the roof end, so it doesn't know.
Next, the roof end isn't reflective. It isn't even glossy. And brushed stainless steel shouldn't really reflect anything.
The AI fails to count the columns that hold the roof end, and it claims they're evenly spaced. They're anything but.
There are three letters "M" on the emblem, but none of them is stand-alone. There is visible text on the logo that does provide additional context: "Universal Campus", "patefacio radix" and "MMXI". Maybe LLaVA would have been able to decipher at least the former, had I fed it the image at its original resolution of 2100x1400 pixels instead of the one I've uploaded with a resolution of 800x533 pixels. Decide for yourself which was or would have been cheating.
"Well-maintained lawn". Ha. The lawn is painted on, and the ground is so bumpy that I wouldn't call it well-maintained.
The entrance of the building is visible. In fact, three of the five entrances are. Four if you count the one that can be seen through the glass on the front. And the main entrance is marked with that huge structure around it.
The "few scattered clouds" are mostly one large cloud.
At least LLaVA is still capable of recognising a digital rendering and tells us how. Just you wait until PBR is out, LLaVA.
#Long #LongPost #CWLong #CWLongPost #FediMeta #FediverseMeta #CWFediMeta #CWFediverseMeta #AltText #AltTextMeta #CWAltTextMeta #ImageDescription #ImageDescriptions #ImageDescriptionMeta #CWImageDescriptionMeta #AI #LLaVA
#Long #LongPost #CWLong #CWLongPost #VirtualWorlds #AltText #AltTextMeta #CWAltTextMeta #ImageDescription #ImageDescriptions #ImageDescriptionMeta #CWImagDescriptionMeta #LLaVA #AI #AIVsHuman #HumanVsAI

And since you obviously haven't actually read anything I've linked to, here's a quote-post of my comment in which I dissect the first AI description.
(This is actually a comment. Find another post further up in this thread.) Now let's pry LLaVA's image description apart, shall we?
The image appears to be a 3D rendering or a screenshot from a video game or a virtual environment.
Typical for an AI: It starts vague. That's because it isn't really sure what it's looking at.
This is not a video game. It's a 3-D virtual world.
At least, LLaVA didn't take this for a real-life photograph.
It shows a character
It's an avatar, not a character.
standing on a paved path with a brick-like texture.
This is the first time that the AI is accurate without being vague. However, there could be more details to this.
The character is facing away from the viewer,
And I can and do tell the audience in my own image description why my avatar is facing away from the viewer. Oh, and that it's the avatar of the creator of this picture, namely myself.
looking towards a sign or information board on the right side of the image.
Nope. Like the AI could see the eyeballs of my avatar from behind. The avatar is actually looking at the cliff in the background.
Also, it's clearly an advertising board.
The environment is forested with tall trees and a dense canopy, suggesting a natural, possibly park-like setting.
If I'm generous, I can let this pass as not exactly wrong. Only that there is no dense canopy, and this is not a park.
The lighting is subdued, with shadows cast by the trees, indicating either early morning or late afternoon.
Nope again. It's actually late morning. The AI doesn't know because it can't tell that the Sun is in the southeast, and because it has got no idea how tall the trees actually are, what with almost all treetops and half the shadow cast by the avatar being out of frame.
The overall atmosphere is calm and serene.
In a setting inspired by thrillers from the 1950s and 1960s. You're adorable, LLaVA. Then again, it was quiet because there was no other avatar present.
There's a whole lot in this image that LLaVA didn't mention at all. First of all, the most blatant shortcomings.
First of all, the colours. Or the lack of them. LLaVA doesn't say with a single word that everything is monochrome. What it's even less aware of is that the motif itself is monochrome, i.e. this whole virtual place is actually monochrome, and the avatar is monochrome, too.
Next, what does my avatar look like? Gender? Skin? Hair? Clothes?
Then there's that thing on the right. LLaVA doesn't even mention that this thing is there.
It doesn't mention the sign to the left, it doesn't mention the cliff at the end of the path, it doesn't mention the mountains in the background, and it's unaware of both the bit of sky near the top edge and the large building hidden behind the trees.
And it does not transcribe even one single bit of text in this image.
And now for what I think should really be in the description, but what no AI will ever be able to describe from looking at an image like this one.
A good image description should mention where an image was taken. AIs can currently only tell that when they're fed famous landmarks. AI won't anytime soon be able to tell from looking at this image that it was taken at the central crossroads of Black White Castle, a sim in the OpenSim-based Pangea Grid. And I'm not even talking about explaining OpenSim, grids and all that to people who don't know what it is.
Speaking of which, the object to the right. LLaVA completely ignores it. However, it should be able to not only correctly identify it as an OpenSimWorld beacon, but also describe what it looks like and explain to the reader what an OpenSimWorld beacon is, what OpenSimWorld is etc. because it should know that this can not be expected to be common knowledge. My own description does that in round about 5,000 characters.
And LLaVA should transcribe what's written on the touch screen which it should correctly identify as a touch screen. It should also mention the sign on the left and transcribe what's written on it.
In fact, all text anywhere within the borders of the picture should be transcribed 100% verbatim. Since there's no rule against transcribing text that's so small that it's illegible or that's so tiny that it's practically invisible or that's partially obscured or partially out of frame, a good AI should be capable of transcribing such text 100% verbatim in its entirety as well. Unless text is too small for me to read in-world, I can and do that.
And how about not only knowing that the advertising board is an advertising board, but also mentioning and describing what's on it? Technically speaking, there's actually a lot of text on that board, and in order to transcribe it, its context needs to be described. That said, I must admit I was sloppy myself and omitted a whole lot of transcriptions in my own description.
Still, AI has a very very long way to go. And it will never fully get there.
#Long #LongPost #CWLong #CWLongPost #AltText #ImageDescription #ImageDescriptions #ImageDescriptionMeta #CWImageDescriptionMeta #AI #LLaVA
#Long #LongPost #CWLong #CWLongPost #VirtualWorlds #AltText #AltTextMeta #CWAltTextMeta #ImageDescription #ImageDescriptions #ImageDescriptionMeta #CWImagDescriptionMeta #LLaVA #AI #AIVsHuman #HumanVsAI

Without any context?
The context matters. A whole lot.
A simple real-life cat photograph can be described in a few hundred characters, and everyone knows what it's all about. It doesn't need much visual description because it's mainly only the cat that matters. Just about everyone knows what real-life cats generally look like, except from the ways they differ from one another. Even people born 100% blind should have a rough enough idea what a cat is and what it looks like from a) being told it if they inquire and b) touching and petting a few cats.
Thus, most elements of a real-life cat photograph can safely be assumed to be common knowledge. They don't require description, and they don't require explanation because everyone should know what a cat is.
Now, let's take the image which LLaVA has described in 558 characters, and which I've previously described in 25,271 characters.
For one, it doesn't focus on anything. It shows an entire scene. If the visual description has to include what's important, it has to include everything in the image because everything in the image is important just the same.
Besides, it's a picture from a 3-D virtual world. Not from the real world. People don't know anything about this kind of 3-D virtual worlds in general, and they don't know anything about this place in particular. In this picture, nothing can safely be assumed to be common knowledge. For blind or visually-impaired users even less.
People may want to know where this image was made. AI won't be able to figure that out. AI can't examine that picture and immediately and with absolute certainty recognise that it was created on a sim called Black-White Castle on an OpenSim grid named Pangea Grid, especially seeing as that place was only a few days old when I was there. LLaVA wasn't even sure if it's a video game or a virtual world. So AI won't be able to tell people.
AI doesn't know either whether or not any of the location information can be considered common knowledge, and therefore whether it needs to be explained so humans will understand it.
I, the human describer, on the other hand, can tell people where exactly this image was made. And I can explain it to them in such a way that they'll understand it with zero prior knowledge about the matter.
Next point: text transcripts. LLaVA didn't even notice that there is text in the image, much less transcribe it. Not transcribing every bit of text in an image is sloppy; not transcribing any text in an image is ableist.
No other AI will even be able to transcribe the text in this image, however. That's because no AI can read any of it. It's all too small and, on top of that, too low-contrast for reliable OCR. All that AI has is the image I've posted at a resolution of 800x533 pixels.
I myself can see the scenery at nigh-infinite resolution by going there. No AI can do that, and no LLM AI will ever be able to do that. And so I can read and transcribe all text in the image 100% verbatim with 100% accuracy.
However, text transcripts require some room in the description, also because they additionally require descriptions of where the text is.
I win again. And so does the long, detailed description.
Would you rather have alt text that is:
I'm not sure if this is typical Mastodon behaviour because it's impossible for Mastodon users to imagine that images can be described elsewhere than in the alt-text (they can, and I have), or if it's intentional trolling.
The 25,271 characters did not go into the alt-text! They went into the post.
I can put so many characters into a post.
I'm not on Mastodon. I'm on Hubzilla which has never had and still doesn't have any character limits.
In the alt-text, there's a separate, shorter, still self-researched and hand-written image description to satisfy those who absolutely demand there be an image description in the alt-text.
25,271 characters in alt-text would cause Mastodon to cut 23,771 characters off and throw them away.
#Long #LongPost #CWLong #CWLongPost #VirtualWorlds #AltText #AltTextMeta #CWAltTextMeta #ImageDescription #ImageDescriptions #ImageDescriptionMeta #CWImagDescriptionMeta #LLaVA #AI #AIVsHuman #HumanVsAI

Add your location to a Google Map as the addon/app thingie still has to be added by the admin for you to be able to use it
They do come pre-installed on the server.
Hubzilla is installed and upgraded using git. And in doing so, not only is Hubzilla itself sourced, but so is, always and automatically, the official addon repository as well. All that stuff is on the server hard drive from the get-go.
What the admin can do is activate and deactivate access to the addons. In the case of PubCrawl, it's activated after installation by default.
It's not like the hubmin installs Hubzilla without PubCrawl, then clones an independent git repository containing PubCrawl into the Hubzilla server, and then goes to the admin interface to switch it on.
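From memory, and hedged accordingly (repository URLs and utility names as I recall them; they may have changed), a typical Hubzilla installation pulls the addons right next to the core code, roughly like this:

git clone https://framagit.org/hubzilla/core.git mysite
cd mysite
util/add_addon_repo https://framagit.org/hubzilla/addons.git hzaddons
# later, when upgrading:
git pull
util/update_addon_repo hzaddons

After that, PubCrawl and the rest are already sitting on the disk; the admin panel merely toggles whether channels may use them.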
#Long #LongPost #CWLong #CWLongPost #FediMeta #FediverseMeta #CWFediMeta #CWFediverseMeta #Hubzilla

Prediction: Alt text will be generated by AI directly on the consumer's side so that *they* can tell what detail, information density, parts of the picture are important for *them*. And pre-written alt text will be frowned upon.
Won't happen.
Maybe AI sometimes happens to be as good as humans when it comes to describing generic, everyday images that are easy to describe. By the way, I keep seeing AI miserably failing to describe cat photos.
But when it comes to extremely obscure niche content, AI can only produce useless train wrecks. And this will never change. When it comes to extremely obscure niche content, AI not only requires full, super-detailed, up-to-the-minute knowledge of all aspects of the topic, down to niches within niches within the niche, but it must be able to explain it, and it must know whether and to what extent it's necessary to explain it.
I've pitted LLaVA against my own hand-written image descriptions. Twice. Not simply against the short image descriptions in my alt-texts, but against the full, long, detailed, explanatory image descriptions in the posts.
And LLaVA failed so, so miserably. What little it described, it often got it wrong. More importantly, LLaVA's descriptions were nowhere near explanatory enough for a casual audience with no prior knowledge in the topic to really understand the image.
Descriptions churned out in five seconds are no match against one that took me eight hours to research and write.
Descriptions churned out in 30 seconds are no match against one that took me two full days to research and write.
When I describe my images, I put abilities to use that AI will never have. Including, but not limited to the ability to join and navigate 3-D virtual worlds. Not to mention that an AI would have to be able to deduce from a picture where exactly a virtual world image was created, and how to get there.
So no, ChatGPT won't write circles around me by next year. Or ever. Neither will any other AI out there.
#Long #LongPost #CWLong #CWLongPost #VirtualWorlds #AltText #AltTextMeta #CWAltTextMeta #ImageDescription #ImageDescriptions #ImageDescriptionMeta #CWImagDescriptionMeta #LLaVA #AI #AIVsHuman #HumanVsAI

Quick and easy solution:
Move away from Mastodon. Go someplace else in the Fediverse that doesn't have a 500-character limit.
Move to Pleroma (5,000 characters, configurable by admin).
Or Akkoma (5,000 characters, configurable by admin).
Or Misskey (3,000 characters, hard-coded).
Or one of the several Misskey forks (thousands of characters again, configurable by admin).
Or Friendica (unlimited characters).
Or Hubzilla (unlimited characters).
Or (streams) (unlimited characters; that is, the database can handle a bit over 65,000 characters on older instances with no database upgrade and over 24,000,000 characters otherwise).
All of them are in the Fediverse. The Fediverse is not only Mastodon. All of them are connected to Mastodon. Proof: This comment comes from Hubzilla.
And then do something that nobody on Mastodon would even think of doing: put the excerpt into the post.
#Long #LongPost #CWLong #CWLongPost #FediMeta #FediverseMeta #CWFediMeta #CWFediverseMeta #NotOnlyMastodon #FediverseIsNotMastodon #MastodonIsNotTheFediverse #Pleroma #Akkoma #Misskey #Forkey #Forkeys #Firefish #Iceshrimp #Sharkey #Catodon #Friendica #Hubzilla #Streams #(streams) #AltText #AltTextMeta #CWAltTextMeta #ImageDescription #ImageDescriptions #ImageDescriptionMeta #CWImageDescriptionMeta #CharacterLimit #CharacterLimits #500Characters

The First New Long Wave Radio Station Of This Millennium -wave
Well, that's what it is. It's the wording chosen back when add-ons were renamed "apps".
Basically, you "install" the "app" on your channel. It integrates the functionality of the "app" into your channel instead of just turning it on, and it adds or makes it possible to add links to the "app" to the navigation bar and/or the burger menu.
I'm not a Hubzilla dev. But it could be that add-ons simply work differently here. I guess add-ons on Friendica are integrated into your account right away, and you just turn them on and off, whereas on Hubzilla, "installing" an "app" does not turn existing functionality in your channel on, but builds the functionality into your channel in the first place. Especially PubCrawl is something that digs deeply into how your channel works, especially into how nomadic identity works.
Don't forget that Hubzilla is not Friendica with nomadic identity and wikis and a different protocol. When Mike turned Friendica into Red, he re-wrote the entire backend from scratch.
This wording may actually be as old as Red. Trying to have it changed now would be like complaining that identities plus content are named "channels" on Hubzilla (and everything that has come after Hubzilla) because the term "channels" is being used in different contexts by other projects nowadays. It'd be like complaining that Hubzilla, like Friendica before it, has its own term for a server instance instead of officially calling it an "instance" or a "server".
#Long #LongPost #CWLong #CWLongPost #FediMeta #FediverseMeta #CWFediMeta #CWFediverseMeta #Hubzilla

Misskey did it right, the Forkeys even more so.
Unlike Mastodon, they didn't aim to be purist Twitter clones. Misskey wanted to do microblogging, but without Twitter's unnecessary limitations and with some cool extra features, partly appealing particularly to a Japanese target audience.
It certainly helped that Misskey was originally launched in 2014, some two years before Mastodon.
Friendica, created in 2010 as Mistpark, is a similar case. It aimed to be an alternative to Facebook, but not a 1:1 Facebook clone. The idea was to build something that does the same thing as Facebook in similar ways as Facebook, but without what limited Facebook and without what sucked about Facebook.
And of course, it got a big pile of cool extra features on top that could be useful.
For example, circles. You may perceive them as either Mastodon's lists done right or a clone of Google+'s circles. Actually, however, Google+ was a clone of Diaspora*, Google+'s circles were a clone of Diaspora*'s aspects, and what are Friendica's circles today used to be Mistpark's groups. And Mistpark pre-dated even Diaspora*, so Mistpark had them first.
Also, next to being an alternative to Facebook, Mistpark was equipped as a full-blown blogging engine. No character limit. Full set of text formatting, up to and including headlines, lists and tables. In-line embedding of images and other media which can be uploaded to the built-in file storage before embedding them. StatusNet's summary field (which Mastodon repurposed into a content warning field seven years after Mistpark's launch in spite of being federated with both StatusNet's successor GNU social and Friendica) and a separate title field. A tag cloud. And so forth.
But we can also see it in the Threadiverse. First there were Lemmy, an attempt at a faithful clone of Reddit that also aimed at replacing Hacker News, and Lotide, which fell by the wayside. Shortly before the Reddit enshittification, /kbin appeared. It added some interesting extra features, but it could never really mature due to being overrun at an early alpha stage.
Now we also have Mbin, a /kbin fork that improves on it, PieFed and Sublinks, all of which are more than mere Reddit or Hacker News clones.
#Long #LongPost #CWLong #CWLongPost #FediMeta #FediverseMeta #CWFediMeta #CWFediverseMeta #Misskey #Forkey #Forkeys #Mistpark #Friendica #Threadiverse #Lemmy #/kbin #Mbin #PieFed #Sublinks

I'm thinking about adding at least one of my channels to Trunk. I mean, it isn't like I don't have enough followers; they've risen above 500 again. But Trunk would help people follow me for a better reason than just one cool post or comment, all still without having to figure out how to check my profile.
That said, Trunk requires you to volunteer on at least one list, in at least one topic. That's where things get difficult.
For one, there's Described Media. I'm not even kidding: it's a list for people who describe the media which they post. People who add alt-text to their images. Even though everybody in the Fediverse is expected to do it all the time, at least if their posts reach Mastodon in some way.
I do it. But I don't do it "the standard Mastodon way". For one, Mastodon's limitations, especially the 500-character limit for posts, don't apply to me. I don't have any character limit in my posts. Thus, nothing forces me to describe and even explain an image only in alt-text because I've got plenty of space in my posts.
Besides, my image descriptions are absolutely massive, especially taking all those typical image description guidelines into consideration. That's because none of them are prepared for the edge-cases that are my images. And with "absolutely massive", I don't mean, "800 characters? Are you nuts?! Who's gonna read that?!" I mean up to 60,000 characters, and I can guarantee you this is not a typo. Maybe even more in the future.
I'm not quite convinced that I'm a good example of a provider of media descriptions, partly because by adhering to general image description rules, I break most of Mastodon's image description rules, partly because next to nobody has the patience to read one image description that's longer than 120 toots or have it read to them by a screen reader, partly also because my own image descriptions become obsolete so quickly whenever I discover something new that I should do in image descriptions.
Even if none of this mattered, I don't post images often. Maybe once every couple months. That's because I have to schedule my image posts due to how much time they consume. The 60,000-character description took me two full days to research and write, breakfast to after dinner. And it might become even rarer in the future. I've started to be able to post images with sensitive content, including but not limited to eyes and faces. But posting these will eat up the time I could also use to post perfectly safe images on this Hubzilla channel.
The Described Media list is rather for people who routinely whip up 200 characters of alt-text in under a minute or so, but who do so at least daily.
An even more obvious list, at least at first glance, would be the virtual-reality list, seeing as the primary topic of this channel is OpenSim. In fact, in the long run, I could add two or three channels to this list.
But OpenSim does not fit on it. The list is for actual virtual reality, for new virtual reality and augmented reality developments of the 2020s. "The Metaverse" as envisioned by most. It absolutely requires VR or AR headsets, full stop.
OpenSim has been using the term "metaverse" routinely since as early as 2007, the year of its inception. But the list is not about "metaverse". It's about VR.
And OpenSim is what's commonly called a "pancake". It's made for desktop and laptop computers and their 2-D screens. It does not really work on VR headsets. It does not work on stand-alone VR headsets with integrated graphics hardware at all. That's mainly because VR headsets require a constantly guaranteed frame rate of 60fps. It isn't simplified and cartoonish and geared towards mobile graphics hardware like Horizons or Rec Room or the like. Instead, it's largely photo-realistic, high-detail stuff with high-resolution textures.
You may get 60fps out of a dedicated graphics unit on a not-too-highly-detailed sim when you're alone. But have more than a few avatars around, and your fps will drop below 60. Join a party or any other event with a couple dozen avatars, and you're heading for slideshow-level fps. That's because the avatars aren't made by the OpenSim devs and optimised for high performance. They mostly entirely consist of user-supplied stuff and optimised for good looks. Some two years ago, one average avatar had more vertices than an entire scene in World of Warcraft. They've only gotten much, much more complex since then.
A liquid-cooled 4090Ti overclocked to kingdom come won't give you 60fps at 1080p at 's Event Plaza on a Friday night. So, what chances does a stand-alone, passively-cooled headset based on phone hardware have if it has to whip up even more pixels? And none of this is even taking recently-introduced Physically-Based Rendering into account, which absolutely requires dedicated graphics hardware with no less than 4GB of dedicated VRAM, preferably at least 8GB.
That is, you couldn't use OpenSim on a stand-alone headset anyway. There are only two OpenSim-compatible viewers available right now, they're only available for desktop operating systems, and their highly complex UIs (pull-down menus like you've last seen in Photoshop etc.) are entirely geared towards desktop and laptop computers.
In brief: OpenSim is not VR, and it's unlikely to ever truly become VR.
Okay, I still have the option to ask one of the four Trunk admins to add an extra "Virtual Worlds" list, arguing that OpenSim, just like Second Life, is not VR and thus doesn't fit onto a VR & AR list. But they might argue that it's close enough to VR & AR for a separate list not being justified.
#Long #LongPost #CWLong #CWLongPost #FediMeta #FediverseMeta #CWFediMeta #CWFediverseMeta #MediaDescription #MediaDescriptions #AltText #AltTextMeta #CWAltTextMeta #ImageDescription #ImageDescriptions #ImageDescriptionMeta #CWImageDescriptionMeta #OpenSim #OpenSimulator #Metaverse #VirtualWorlds #VR #VirtualReality #AR #AugmentedReality #Trunk

FenTiger
One Fediverse server instance recognising people from other instances being logged in.
For example, here on Hubzilla, I could theoretically allow only confirmed connections to see my profile. So if someone opens my profile page, they won't see anything.
Now let's suppose one of my connections is on something with client-side OpenWebAuth support. Friendica has client-side support, Hubzilla and (streams) have full support. And let's suppose that someone is logged in using the same Web browser that they use to visit my profile page.
OpenWebAuth automatically acts as magic single sign-on. It recognises their login and their identity, and it shows them my profile because I've allowed them to see it. On the very same page on which everyone else wouldn't see anything.
I could give such permissions for many other things, e.g. to allow people to post to my wall, to see my files and images, to upload stuff to my file space, to edit my wikis (unfortunately only for all of them at once, not individually per wiki) etc.
A practical use-case for such permissions in combination with OpenWebAuth would be if I ran a channel as a public forum. In that case, I could give a bunch of trustworthy Hubzilla or (streams) users additional permissions so that they can moderate the forum, at least partly even from their own existing channels.
#Long #LongPost #CWLong #CWLongPost #FediMeta #FediverseMeta #CWFediMeta #CWFediverseMeta #Hubzilla #OpenWebAuth #SingleSignOn

c) ActivityPub is off by default, not to mention d) how to turn it on which is anything but straight-forward.
Holy cow. I guess I'll stay on Friendica for the foreseeable future :D
To be fair, ActivityPub is off by default because it makes nomadic identity more difficult on the level that Zot6 offers. And it's off by default at channel level only whereas it's on by default at hub level.
With "anything but straight-foward", I mean it isn't like on (streams) where you have an ActivityPub on/off switch in the settings which is usually even on when you first discover it.
Instead, ActivityPub is an add-on, an "app" that has to be "installed". But newbies don't expect Hubzilla to have add-ons at all because Twitter has none, and Mastodon has none. And they certainly wouldn't expect ActivityPub, of all things, to be an add-on and off by default. In fact, most probably join Hubzilla in the belief that it is based on ActivityPub like "everything else" in the Fediverse.
#Long #LongPost #CWLong #CWLongPost #FediMeta #FediverseMeta #CWFediMeta #CWFediverseMeta #Hubzilla #Streams #(streams) #ActivityPub #NomadicIdentity

The only Hubzilla forums I'm aware of that are more or less active are Hubzilla-centric themselves: the Hubzilla support forum, which is pretty much useless if you aren't on Hubzilla yourself, or a kind of planning forum for the advancement of the Hubzilla ecosystem, which I'd even expect to have ActivityPub off.
The problem is simply that only very, very few people take the step from one of the ActivityPub-based microblogging projects to Friendica just to have a forum. My estimation is that every other Mastodon user still "knows" that the Fediverse is only Mastodon. Of the rest, some have never heard of Friendica, and even more don't know that Friendica has forums that Mastodon users can join.
I guess quite a few of those who try their luck quickly discover that Friendica is, in fact, not a pure group/forum platform, not "Guppe, but it's moderated", and too confusing and difficult to wrap their minds around. The step from Mastodon to Friendica is much bigger than the step from Twitter to Mastodon.
And that's Friendica. It's even worse with Hubzilla.
There has been a poll which showed that 75% of those who took it had never heard of Hubzilla before this poll. And I guess the audience was actually biased, also because it wasn't on Mastodon AFAIR, so the real number has to be even higher. Amongst those who have heard of Hubzilla, its capabilities are even murkier than Friendica's.
Hubzilla's Web UI is basically what Friendica's UI was a dozen years ago, and it capitulates in the face of Hubzilla's sheer number of features. Its documentation is both incomplete and hopelessly outdated in many parts, and it's written like a tech spec rather than a user manual.
Lastly, most Mastodon-to-Hubzilla direct converts throw in the towel the same day they took the jump. That's because they fail to connect to anyone on Mastodon. That, in turn, is because it takes another, experienced Hubzilla user to tell them that a) Hubzilla is not based on ActivityPub, b) ActivityPub is optional on Hubzilla, and c) ActivityPub is off by default, not to mention d) how to turn it on, which is anything but straight-forward.
That's why hardly anyone runs a forum on Hubzilla.
#
Long #
LongPost #
CWLong #
CWLongPost #
FediMeta #
FediverseMeta #
CWFediMeta #
CWFediverseMeta #
Groups #
Forums #
Friendica #
Hubzilla between Misskey, now dead FoundKey, Firefish, Iceshrimp.NET, Sharkey and Catodon.
Sharkey is said to have a terrible Mastodon API implementation, though.
Also, in case you haven't noticed, is on Friendica, which should be compatible with Mastodon, Misskey and their respective forks alike. We've had these federated groups for five and a half years longer than we've had Mastodon.
Granted, Friendica does not specialise in groups/forums, but still.
Hubzilla has federated public and private forums, too, with even more features. Its permissions system makes it possible for other Hubzilla or (streams) users to moderate forums by becoming co-admins, not to mention that Hubzilla and (streams) users can moderate their own threads. But I've yet to see Hubzilla forums in action with Mastodon and *key users.
(streams) is similar, with even more advanced permission controls, and it doesn't come with quite as many extra optional features that you'll never need as Hubzilla does. But it's being redesigned on the inside to introduce nomadic identity via ActivityPub, which has recently caused problems in federation with anything that doesn't use (streams)' own Nomad protocol. I'll have to check whether these issues are gone in the current release.
#
Long #
LongPost #
CWLong #
CWLongPost #
FediMeta #
FediverseMeta #
CWFediMeta #
CWFediverseMeta #
Fediverse #
Misskey #
Forkey #
Forkeys #
Firefish #
FoundKey #
Iceshrimp #
Iceshrimp.NET #
Sharkey #
Catodon #
Groups #
Forums #
Friendica #
Hubzilla #
Streams #
(streams)Chillingly, most people who develop long COVID did not have particularly vicious cases of the virus initially.
That's in part because so many more people experience a mild form of COVID rather than a severe one.
(Across most studies, long COVID risk does increase with the severity of the initial infection.)
And each time people become reinfected with the virus, they're at risk of developing long COVID,
even if they didn't experience it previously.
The authors note that studies on recovery from long COVID are sparse and inconsistent.
But those that have closely evaluated individual manifestations of the virus have found recovery rates to be fairly low at one year,
and only 7% to 10% fully recovered after two years.
For millions and millions of people, the debilitating effects of long COVID are just that.
The economic toll is its own story.
A Census Bureau Household Pulse Survey in 2022 found that between
2 million and 4 million working-age Americans were out of work because they were sidelined with the effects of long COVID.
Meanwhile, 20% of people with long COVID surveyed by the United Kingdom's Trades Union Congress said they were not working.
Another 16% were working reduced schedules.
The $1 trillion estimated annual global economic hit involves Organization for Economic Cooperation and Development (OECD) countries due to
reductions in quality of life and labor force participation,
the Nature Medicine review says.
And that price tag does not factor in the direct costs of healthcare, another likely category of deep financial fissure.
Al-Aly and his co-authors are pleading for governments,
especially U.S. health agencies,
to dramatically upgrade their levels of activity to investigate long COVID,
learn more about its mechanisms and pathways,
and develop vaccines that better block infection.
They're also pushing for large-scale platform trials to test multiple drugs simultaneously,
so we can quickly learn what works and what doesn't.
They have an ally in Congress.
U.S. Sen. Bernie Sanders recently introduced the Long COVID Research Moonshot Act,
a bill that would guarantee $1 billion per year for a decade to the National Institutes of Health
to support long COVID research,
the urgent pursuit of treatments,
and the expansion of care for patients across the country.
Sanders' news release announcing the act puts the long COVID count in the U.S. at
22 million, including a million children.
Among other things, the bill would require NIH to establish a long COVID and a to speed up clinical trials,
and to make any subsequent treatments so that every patient could receive it.
"The legislation that we have introduced finally recognizes that long COVID is a public health emergency," Sanders said.
"Congress must act now to ensure treatments are developed and made available for Americans struggling with long COVID."
If it accomplishes nothing else, Sanders proposal may help lift the veil on long COVID in the country and around the world.
It's a topic that has been largely pushed into the shadows,
as governmental agencies
and official policy
tried to construct a reality in which COVID in general was just no longer a significant issue.
That was never the case, and it's certainly not the case now.
"The reality is otherwise," Al-Aly says.
As the virus mutates into new strains and continues to affect millions, the long tail of COVID is once again forcing itself to center stage.
Multiple COVID subvariants, collectively nicknamed ,
are powerfully present in the U.S.,
and reports from California indicate that some patients are complaining of sore throats so strong it feels like they're
swallowing razors or broken glass,
according to the Los Angeles Times.
Topol, meanwhile, says the Sato Lab in Japan has characterized one of the newest COVID strains,
KP.3.1.1, in a preprint as having
the most immune evasion and infectivity of any of the variants
derived from previous powerful iterations of the JN.1 strain, which was prominent last winter.
Although the Centers for Disease Control and Prevention says that severe outcomes and deaths from COVID have decreased substantially overall from the pandemic's early days,
wastewater data shows viral activity is high nationally
and COVID-19 infections are growing or likely growing in 35 states.
More COVID infections mean more cases of long COVID.
And long COVID is already exacting an enormous toll on both the people and economies of the world.
Those are words you aren't hearing from many government bodies.
But, the researchers say, the evidence tells the story.
"Despite the dire impact of long COVID on individuals and society,
I fear that many are still unaware of the danger," says Akiko Iwasaki, professor of immunology at Yale School of Medicine and co-lead investigator of the university's COVID-19 Recovery Study.
"There is an urgent need to provide proper diagnosis and treatment for people living with long COVID."
The authors lay out a number of preventive policy measures, including
increased use of masking,
improved ventilation systems
and a vaccination program that pairs COVID shots with seasonal flu shots to extend their reach and improve uptake.
But there's a genuine question as to whether enough folks are paying close attention for any of this to matter.
As the authors point out,
a survey found that as of last August,
one-third of American adults still hadn't even heard of long COVID.
In reality, long COVID was identified and defined in the first full year of the pandemic, 2020,
and it has been increasing its case count ever since.
This form of COVID is particularly perilous because, for many people, its symptoms may last years
(or a lifetime)
and their effects may trigger all sorts of associated problems and costs.
Long COVID affects nearly every organ system, the review notes,
including the cardiovascular, immune, gastrointestinal and reproductive systems.
While more than 200 symptoms have been identified, common symptoms include
memory problems,
difficulty concentrating,
fatigue,
heart palpitations,
chronic cough,
shortness of breath
and recurring headaches.
Long COVID is a $1 trillion problem with no cure.
Experts plead for governments to wake up
For months, governmental officials around the world have appeared to want to forgo discussing the specter of long COVID.
As a new review makes clear, that is wishful thinking
and the latest COVID variants may well kick long COVID into overdrive,
a scenario that researchers and experts have been warning about for some time.
"I think they (government agencies) are itching to declare that COVID is over and that long COVID does not exist," says Ziyad Al-Aly,
director of the Clinical Epidemiology Center at Veterans Affairs St. Louis Health Care System -- and lead author of the review.
"It is much more pleasant to pretend as if emergency department visits and hospitalizations haven't been rising sharply this summer."
Al-Aly and several other top researchers lay out a difficult truth:
Long COVID has already affected an estimated 400 million people worldwide,
a number the authors say is likely conservative,
at an economic cost of about $1 trillion annually
equivalent to 1% of the global economy.
Moreover, the risk of a person being hit with long COVID rises with repeated infections of the virus itself,
and recent COVID activity has experts watching closely.
As review co-author Eric Topol noted in a recent blog post,
the current COVID incursion is ramping up quickly,
with one modeler estimating
900,000 new infections per day in the U.S. alone.
The new significant wave, Topol said via X.
It's hard to believe that we are well into our fifth year of this virus and not taking the requisite steps to gain control.
Well, there are people who steer clear of Facebook alternatives because of the bad associations with Facebook, from Cambridge Analytica all the way to the active support of the genocide against the Rohingya.
But it is understandable: once you have locked onto Twitter and Twitter clones that are as faithful to the original as possible, you no longer cope easily with things that work completely differently and first have to go through the trouble of rethinking everything.
First of all: Friendica is not a clone of 2024 Facebook. Nor of 2022 Facebook. Friendica was developed back in 2010, and not as a clone, but as an alternative. Not as an equivalent alternative either, but as a better one.
A few things about Friendica that are typical of Facebook and untypical of Twitter/Mastodon:
- Connections are mutual by default. You don't have followers on one side and followed on the other. You only have what Facebook calls "friends", just without that utterly moronic name that only Americans could come up with.
- Not everything is a post. If it's a reply, it isn't a post but a comment. Every thread has exactly one post at the very beginning; everything else is a comment. Tumblr and blogs work exactly the same way.
- You have different input fields for comments than for posts.
- You can send posts specifically to lists. Except that the lists aren't called lists but circles, and compared to Mastodon they are a hundred times easier to manage. By now probably even easier than on Facebook itself.
- No super-tight character limit. Facebook doesn't see itself as a microblog. Neither does Friendica.
- Mastodon doesn't suggest contacts to you. Friendica, as far as I know, does. Incidentally, so does everything descended from Friendica.
In parts, however, Friendica is even closer to blogs than to Facebook:
- No character limit at all.
- Text formatting. And, as far as I know (I have been out of Friendica since the mid-2010s), the full works, which also supports HTML. Friendica uses BBcode and can optionally use Markdown as well.
- Images embedded in the text. As many as you like.
- Built-in "cloud storage" for the images. You know where your images are because you put them there yourself.
- Titles.
- Summaries. That is, what Mastodon has repurposed for CWs, but still used as summaries. These only work via BBcode tags nowadays, but they work (see the sketch after this list).
- And here, too: replies are not posts but comments. And there is a separate input form for them.
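To make this title/summary/body split concrete, here is a purely illustrative Python sketch of how such a Friendica post could be assembled as BBcode. The [abstract] tag as the summary marker, the function name and the example strings are assumptions for demonstration only; check the BBcode reference of your Friendica version before relying on them.

# Illustrative sketch: assemble the three parts of a Friendica post,
# with the summary embedded as a BBcode tag in the body text.
def compose_friendica_post(title: str, summary: str, body: str) -> tuple[str, str]:
    """Return (title, BBcode body); the summary is assumed to live in [abstract] tags."""
    bbcode_body = f"[abstract]{summary}[/abstract]\n{body}"
    return title, bbcode_body

title, body = compose_friendica_post(
    title="A walk through the grid",
    summary="Short summary that Mastodon-style clients may display like a CW",
    body="The [b]full[/b] post text, with [url=https://example.org]a link[/url] and as many in-line images as you like.",
)
print(title)
print(body)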
So anyone coming from Twitter and then from Mastodon will have to adjust more going from Mastodon to Friendica than from Twitter to Mastodon. But: "Why can't they just do it like Mastodon, seeing as it's in the Fediverse?" won't get you anywhere.
A brand-new English-language reading tip from , who until now knew practically only Mastodon, then tried Friendica and is so blown away that she doesn't want to leave any more: .
Also worth reading is  by  from late October 2022, when Twitter had just been taken over by Elon Musk.
#
Long #
LongPost #
CWLong #
CWLongPost #
LangerPost #
CWLangerPost #
FediMeta #
FediverseMeta #
CWFediMeta #
CWFediverseMeta #
FriendicaGiven that, I'm not sure how long a screen reader takes to read 40,000 character alt-text per image
Don't make the old mistake of taking "alt-text" and "image description" to be 100% synonymous. Alt-text can be more than an image description, and an image description does not always go into the alt-text.
The 40,000 characters don't go into the alt-text. If they did, Mastodon would chop 38,500 characters off and throw them away, as would Misskey and all forks of both.
The 40,000 characters go into the post text body. The toot, if you want. Where you have a limit of 500 characters, I have none at all. On Hubzilla, which supports in-line embedding of images, they go directly below the images.
Into the alt-text goes a second, different, separate, independent, much shorter image description.
Each one of my images usually has two descriptions. One full, long, detailed description with all explanations and transcripts in the post text body. One significantly shorter description in the alt-text.
The long description in the post text body is there to deliver all necessary information without Mastodon, Glitch, Ecko, Hometown, Misskey, Firefish, Iceshrimp, Sharkey, Catodon etc. etc. putting the axe to it at the 1,500-character mark.
The short description in the alt-text is there to satisfy those Mastodon users who absolutely demand there be an actually sufficient image description in the alt-text, no matter how. Even if there already is a much longer image description in the post.
After all, they can't see the image description in the post. The post is hidden behind a content warning because it is longer than 500 characters. I always put all posts that exceed 500 characters behind a Mastodon-style content warning that starts with a StatusNet/Friendica/Hubzilla-style summary of the post, followed at least by "CW: long post (
<number of characters in the post, raw count> characters)".
Sometimes I take the extra effort upon me to specify how many of these characters are actual post, and how many are image description, which makes it clear right away that there is an image description behind that CW.
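As a rough illustration of the bookkeeping behind these content warnings, here is a small, purely hypothetical Python sketch that builds such a summary-plus-character-count CW line and checks an alt-text against the 1,500-character Mastodon cap mentioned above. The exact wording, the names and the sample values are assumptions for demonstration, not a description of any actual tool.

# Hypothetical sketch: compose a "summary CW: long post (N characters)" line
# and warn if an alt-text would exceed Mastodon's 1,500-character limit.
MASTODON_ALT_TEXT_LIMIT = 1500

def build_cw(summary: str, post_text: str, image_description: str = "") -> str:
    """Return the content-warning line with a raw character count."""
    total = len(post_text) + len(image_description)
    cw = f"{summary} CW: long post ({total:,} characters"
    if image_description:
        cw += f", {len(image_description):,} of which are image description"
    return cw + ")"

def check_alt_text(alt_text: str) -> None:
    """Point out when Mastodon would cut the alt-text off."""
    if len(alt_text) > MASTODON_ALT_TEXT_LIMIT:
        excess = len(alt_text) - MASTODON_ALT_TEXT_LIMIT
        print(f"Alt-text is {excess} characters over Mastodon's limit and would be truncated.")

print(build_cw("Classic creations at OpenSimFest 2023", "x" * 800, "y" * 40000))
check_alt_text("z" * 1600)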
What I always do is add to the alt-text the extra information that a longer, more detailed image description with explanations and text transcripts can be found in the post.
Depending on how much room the short image description leaves me in the alt-text, I also add where exactly the long image description can be found. And that's two different locations, depending on whether you're on something that supports in-line images (e.g. Pleroma, Akkoma, Friendica, Hubzilla, (streams)) or not (e.g. Mastodon, Misskey and their forks).
The rest of us can just look at the image so we won't use alt-text as we read the post that accompanies the image for context.
On Mastodon, you'll only see the image and a content warning at first. The content warning starts with a summary, unusual for Mastodon, but standard on Hubzilla. Then it warns you about the post being long, including how long the post is. That'll most likely be a five-digit number.
When you open the content warning, you'll see a few hundred or a very few thousand characters of actual post text. Then, after two blank lines, comes the long image description, still in the post. And that's usually several tens of thousands of characters. If the post is recent enough, the image description is announced by an HTML headline so even those who don't have images in the post know where the actual post text ends and the long image description begins.
If you want to know from personal experience what an image post of mine looks like on Mastodon, search for the hashtag #
UniversalCampus. The second or third post from the top is a good example. It has a long image description in the post text body of not 40,000, but over 60,000 characters, my longest to date. The alt-text is precisely 1,500 characters long, 1,402 of which are image description.
By the way, is what the self-same post looks like on Hubzilla.
An example with multiple images can be found under the hashtag #
OSFest2023. You have to scroll down until you find a post whose content warning starts with the summary, "Classic creations by Arcadia Asylum a.k.a. Lora Lemon/Aley at OpenSimFest 2023". This one is hopelessly outdated now, by the way.
is the Hubzilla original.
#
Long #
LongPost #
CWLong #
CWLongPost #
AltText #
AltTextMeta #
CWAltTextMeta #
ImageDescription #
ImageDescriptions #
ImageDescriptionMeta #
CWImageDescriptionMetatoo stupid if they can't even operate a search engine properly.
Why should they google something they don't even suspect might exist?
And as long as you don't explicitly tell them that there are things other than Mastodon that are also in the Fediverse and also connect to Mastodon, it completely exceeds their imagination.
Would you, all by yourself, ever get the idea to search for decentralised, federated 3-D virtual worlds based on free, open-source software if I didn't tell you that such a thing exists? In any case, for those who have never heard of it, the Fediverse is even more far-fetched and even more unimaginable. It is the four-dimensional hypercube of the Internet.
#
Long #
LongPost #
CWLong #
CWLongPost #
LangerPost #
CWLangerPost #
FediMeta #
FediverseMeta #
CWFediMeta #
CWFediverseMeta #
Fediverse #
Mastodon #
NichtNurMastodonThe universities especially, I don't understand how anyone can flagellate themselves with Mastodon like that. They are putting themselves in handcuffs and leg irons with such a limited service that it makes you want to run around screaming.
They don't know that there is anything other than Mastodon.
Because nobody tells them that there is anything other than Mastodon.
Because all the Mastodon folks who want to bring the universities into the Fediverse don't tell them anything about it.
Because every other one of them doesn't know it themselves, and the other half have other reasons against it.
And the universities have no other sources. Apparently, stupidly, not even the technical universities.
#
Long #
LongPost #
CWLong #
CWLongPost #
LangerPost #
CWLangerPost #
FediMeta #
FediverseMeta #
CWFediMeta #
CWFediverseMeta #
Fediverse #
Mastodon #
Unis4Mastodon #
UnisInsFediverse #
NichtNurMastodonI notice it in normal conversations that only Mastodon is regarded as the real thing there.
Someone once told me to my face that Mastodon is the absolutely only Fediverse project that is "feature-complete", and that this is a fact.
I would have loved to slam a whole list of Hubzilla features in his face, demonstrate some of them on top of that, and ask him whether Mastodon has those, too.
But in the course of their Fediverse onboarding, people generally "learn" first that there is only Mastodon, then that the Fediverse is only Mastodon, and then that Eugen Rochko invented the Fediverse and ActivityPub, that Mastodon was there first, that Mastodon is the Fediverse gold standard, and that everything that isn't Mastodon is nothing more than an alternative UI for Mastodon.
But the same happens in a communication drive calling for universities to join "Mastodon" and then refusing to use "Fediverse" for it.
Let me guess.
"'Mastodon' and 'Fediverse' are exactly the same thing. Both mean the same. There's no difference."
"People and the universities are far more likely to know Mastodon than the Fediverse. There's no point in approaching them with the Fediverse, they won't understand it. They are more likely to know and understand Mastodon."
"It mustn't be too complicated for the universities. It's easier to invite them straight to Mastodon than to first explain the whole Fediverse to them." As if every university were populated solely by tech-averse dimwits.
"The universities only need a replacement for Twitter anyway." And Akkoma, Firefish, Iceshrimp etc. are not Twitter replacements? Oh right, you've never heard of them. And the universities will certainly get by with a maximum of 500 characters? And with the Mastodon top brass's total refusal to support HTML rendering because that supposedly wouldn't be microblogging?
Besides, I believe that once a technical university has set up its own Hubzilla hub, they'll be over the moon given the possibilities they have there from a single source and in a single place. Mario might even get PRs from professors or students for improvements.
"Mastodon first, then we'll see." No, then we won't see. Most private users who land on Mastodon hardly ever get away from it within the Fediverse, because there is no idiot-proof one-click migration to wherever-else inside their own favourite Mastodon app. And institutions like universities with their own instance are far less flexible still. Even if they wanted to move elsewhere, it would be a process of months or years until it was all done.
How they then intend to reconcile Mastodon with Flipboard, WordPress and, later on, Threads is beyond me.
Not at all.
The mere concept, the mere idea that anything else that isn't Mastodon could be connected to Mastodon will be completely alien to them. The universities are onboarded the same way as the many private Twitter refugees: there is only Mastodon, and Mastodon is decentralised on the inside but a walled garden of its own towards the outside. Exactly the image that Rochko and the gGmbH constantly try to convey.
I think you've probably experienced it often enough yourself: Mastodon users crapping bricks when, through you, they unexpectedly learned the completely unimaginable the hard way: the Fediverse is not only Mastodon. Mastodon is also connected to other things. And yes, that is normal and intended.
But individual private users can react to this realisation within seconds. At a university, the realisation itself first has to pass through various people and through assorted departments and committees and so on until really everyone who has anything to do with it knows about it. That alone can take weeks, if not months. A reaction to it can take even longer.
And when someone from outside Mastodon then tries to interact with the university instance, and it isn't exactly the same as when someone else on Mastodon does it, the people in charge stare at it like deer into high beams and know neither what is happening nor what they are supposed to do.
It will be difficult enough just to keep university instances up to date. Maybe not at a technical university with necessarily capable IT, because it would be embarrassing if their own first-year computer science students had more of a clue than the in-house IT. But everywhere else there is the install-once-and-never-update risk.
#
Long #
LongPost #
CWLong #
CWLongPost #
LangerPost #
CWLangerPost #
FediMeta #
FediverseMeta #
CWFediMeta #
CWFediverseMeta #
Fediverse #
Mastodon #
Unis4Mastodon #
UnisInsFediverse #
NichtNurMastodonWell... I don't think anyone's necessarily expecting perfection. They just expect a reasonable and deliberate effort.
They may not expect perfection all over. But they expect full accessibility for
themselves.
There are some guidelines out there from WGBH and other sources that can help decide what is and isn't necessary to have a well-described image.
Trust me, I know these guidelines. I follow them to a tee as far as that's possible for me. They have influenced me greatly and still do.
Just recently, I've learned two new things from such a guide. One, dimensions must always be given as relative to a size that people, especially blind or visually-impaired people, are familiar with, e.g. human body height. Two, colours must always be described based on a specific selection of basic colours plus brightness/darkness plus saturation. Blind people have no concept of "cerise" or "crimson" or "mocha". These two points rendered all my previous image descriptions obsolete.
But my images are extreme edge-cases that put these guidelines and the general image accessibility rules to the test.
Let me give you a few examples.
First of all, in order to keep an image description, especially in alt-text, short and concise, the guides tell me to focus on what's important in the image in the context of the post, describe that and sweep everything else under the rug because it's unimportant.
This advice completely failed because there are
several dozen singular elements in the image that are important within the context of the post. I did describe everything in the images, but only a tiny fraction of the 40,000-character description of the first image is about the surroundings.
It failed even more and especially . In these cases, I wanted to show
the whole scenery with everything in it. Nothing was more or less important in the images within the context of the post than anything else. Thus, I had to describe everything because, in a sense, everything was important.
It also failed . Again, I wanted to show the whole scenery. But even if I had wanted to show only the central element of the image, the building, I would not have been able to shrink the image description to a sensible size. The building is so complex and so strange and unusual at the same time that a highly detailed and very long description of it was absolutely necessary and inevitable. It's for this reason why I normally try to avoid having buildings in my images.
Another piece of advice is to mind the target audience and write for them. This advice always comes in guidelines for alt-texts on static Web sites or blogs. It never comes in guidelines for alt-texts in social media.
Static Web sites or blogs have very defined target audiences, namely those who actively seek out the kind of content that is on these sites and comes from these blogs. Especially if they're about science, technology or the like, the Web designer or blogger can always count on the target audience being interested in the field and, thus, coming with substantial prior knowledge. Certain things simply don't have to be explained to this target audience because they already know them.
The target audience on social media, on the other hand, can be
anyone. In the Fediverse, my target audience includes people who happen upon my posts on the federated timelines of their instances.
People who have one of my image posts boosted/reposted/renoted/repeated to them for whatever reason.
People who discover one of my image posts on their local timeline because someone on their instance has boosted it.
People who come across one of my image posts among the posts of someone whose profile they're checking out, but that user has only just boosted one of my image posts. Granted, I don't think anyone has ever boosted any one of my image posts because they're too intimidating for Mastodon, especially not , but it could always happen.
And, of course, people who happen to follow one of the hashtags I use, and that hashtag is not one about the topic (e.g. #OpenSim, https://hub.netzgemeinde.eu/search?tag=OpenSim), but it happens to be one that I use as a filter trigger (e.g. #LongPost, https://hub.netzgemeinde.eu/search?tag=LongPost). Or they're interested in all things alt-text, so they follow #AltText (https://hub.netzgemeinde.eu/search?tag=AltText), but I put that hashtag on my image posts as well.
Now, my image posts are about a general topic with which maybe one in over 200,000 Fedizens is familiar. The chances that someone who happens upon one of my image posts is familiar with the general topic are practically nil.
Thus, taking my target audience into account, I also have to take into account that at least parts of my target audience know
nothing about what my image posts are about. Absolutely zilch. Not even the basics.
For starters, this means I have to explain everything from the ground up. Look at the image posts I've linked to again. Open the content warnings again. Check the beginnings of the full image descriptions. Notice how much I have to ramble in order to only explain where the image was taken.
In real-life pictures, you'll probably never need more than 70 characters to name even the most obscure real-life location. In fact, you'll often get away with name-dropping because most people are familiar with the place or even a specific landmark anyway.
In my virtual world pictures, I can't get away with name-dropping. I have to go all the way and explain on which sim a landmark is, in which grid the sim is, that the grid is a virtual world based on OpenSim, what OpenSim is, what Second Life is, what grids are, what this particular grid is, why they're named grids, what regions are, what sims are, even what the Hypergrid is in order to make clear how my avatar got to that particular location in the first place. None of this can be expected to be known by everyone in the target audience.
Mentioning the place where I've created a picture often requires more characters than fit into Mastodon's alt text.
Likewise, most people should at least have a rough idea what the Capitol is and what it looks like. Or the Eiffel Tower. Or the Taj Mahal. Or a real-life cat, apart from the ways that cats look different from one another.
How many people, do you think, know right off the bat what the Sendalonde Community Library is and what it looks like? Would you know without googling?
Or how many people, do you think, know right off the bat what an OpenSimWorld beacon is and what it looks like? You think I could get away with name-dropping "OpenSimWorld beacon" in an image description? It doesn't help that there are at least five standard variants from various versions, and some sim users customise them or build their own.
Or if I just simply wrote that there's a teleporter somewhere, how many people, do you think, would be sufficiently informed? How many non-sighted people would be sufficiently informed?
I could post a picture that includes a virtual version of the Eiffel Tower or the Golden Gate Bridge. Really, I could, I know where to find them. Blind or visually-impaired people may have a rough idea what they look like in the real world. People with bad mobile Internet for whom my image doesn't load may know exactly what these structures look like in the real world. But how are they supposed to know what the virtual models of these structures look like? And how is anyone supposed to know where they are located?
So, yes, I do take into account that parts of my target audience may not be fully sighted. That's why I'm doing all that in the first place. That's why I describe stuff that many people may deem unimportant.
See, I'm posting about 3-D virtual worlds. For many people, that's completely uninteresting. And I guess quite a few actually filter the word "metaverse" out for whatever reason.
But there may be people who have actually put high hopes into virtual worlds without knowing that virtual worlds already existed at that point. Then they heard about "the metaverse" dwindling away. And they were disappointed.
And then they come across one of my image posts. About a 3-D virtual world. And they're immediately on the edges of their seats in utter excitement. There are actually
existing virtual worlds? Like,
right now? That picture in that post is not an AI rendering, but it was
made in an actual, real, living 3-D virtual world!
If they read on, they'll discover on top of all that that the virtual world which my image shows is free and open-source. And decentralised. You know, like Mastodon. To the point where anyone can run their own instance. Or rather, grid. And connect it to all the other grids.
This is not what they've dreamed of. This
exceeds what they've dreamed of. They would never even have dared to
dream of something like this because it was so very unimaginable. And now it turns out it's very real. Oh, and it has been for 17 years already.
Now they're curious like you wouldn't believe. They want to know
everything about these worlds. They want to soak up any information about them they can get like a sponge. They want to explore these new and unknown worlds.
If they're sighted, they start by letting their eyes wander around the image. No matter what the image focuses on within the context of the post, they check out
everything, also because all that stuff looks much, much more realistic and much, much less cartoonish than those official previews from Meta's Horizons which all of a sudden look even more ridiculous and pathetic in comparison.
I could post a virtual cat picture. No, really, I could. They wouldn't just look at the cat and go, "Aww, how cute." They would check out the surroundings and the background just as well, even though the surroundings and the background don't really matter. But to them, they
do matter because it's such an exciting new world.
Now, if these people were blind or visually-impaired, one might argue that they wouldn't be nearly as interested in such a highly visual medium as a 3-D virtual world. I wouldn't count on that. Blind or visually-impaired people might be just as curious about these new worlds as sighted people and just as eager to go explore these new worlds. They might be just as eager to know what
everything in
any of my pictures looks like, regardless of context. But they need my help.
In general, I don't buy the notion that blind or visually-impaired people can easily be satisfied by mentioning stuff that's in an image, that you can get away with mentioning and name-dropping and nothing more.
I always expect blind or visually-impaired people to be all like, "Yeah, that's fine, but
what does it look like?"
The more curious they are about something, the more eager they are to know what it looks like. And the newer and/or more obscure something is to them, the less likely they are to already know what it looks like. And the virtual worlds I write about are
completely unknown to them at this point.
All this ties in with the next point. And that's to avoid technical terminology and jargon. If you can't, you have to explain it in such a way that even the most casual audience can fully grasp it with no problems and without having to look up anything themselves. Anything less is ableist.
I can't fully avoid it. If I tried, I would have to constantly write around it in such a way that my image descriptions would get even more confusing.
So I have to explain it. All of it. Down to the very basics. Down to something that I can expect everyone in the Fediverse to know and be familiar with.
Seeing as how very niche and obscure and special-interest my virtual worlds topic is, and how many unfamiliar elements will inevitably appear in both my images and their descriptions, I have a whole lot to explain. Even explaining Second Life would be easier than explaining OpenSim because explaining OpenSim
requires a previous explanation of Second Life.
Anyone who believes that I can cover all that in significantly under 500 characters, in a way that absolutely everyone can understand right away with no special prior knowledge, without googling, without asking me questions afterwards: I have a bridge to sell you.
Then there is how people should be described. I only post photographs of real-life people if they're part of a known, established meme template.
Otherwise, virtual avatars take their place. I started to avoid posting avatars after I learned about the eye contact trigger. The issue here is two-fold: Mastodon doesn't hide images behind content warnings. And Hubzilla can't make Mastodon blank out sensitive images. So any sensitive image which I post on Hubzilla will always end up immediately in plain sight for everyone, at least for those who don't have filters that catch these image posts.
When I couldn't really avoid having an avatar in an image, I showed the avatar from behind so that the face was completely invisible. It still substantially increased the effort of description and the length of the description, but it was mostly safe from triggering people, and I didn't have to go through the effort of describing the avatar's face and mimics.
Now I'm also on (streams) which can make Mastodon blank out sensitive images using one out of two specific hashtags. So there's no stopping me posting pictures with avatars in them and even showing their faces.
Describing avatars can end up tedious, though, much more tedious than describing comparable real-life humans, applying the same description rules.
First of all, there's no skimping. There's no "it doesn't matter what the avatar looks like". It
always matters to someone what the avatar looks like. Again, exciting new world to explore and all. See above. And if the image is a portrait of the avatar, it matters
very much what the avatar looks like.
Next, there's much less that doesn't have to be explained because it's a given. For example, there's quite a bit of debate on whether or not to mention a real-life human's skin tone, and if so, how. The general consensus is usually to limit it to "light, medium light, medium, medium dark, dark". Nothing much more is necessary except maybe freckles or something. But a real-life human skin always looks like a real-life human skin. That's a given, isn't it?
In a virtual world, it isn't a given. In case you were unaware: The skin of a human avatar in a virtual world can be a plain, solid tint on a 3-D mesh. It can be photo-realistic. It can be one or multiple actual photographs, e.g. because the whole avatar was generated from photographs and a 3-D laser scan of a real-life human. And it can be anything in-between.
Look up early promo pictures showing Mark Zuckerberg's personal Horizons avatar. Then search Flickr for Second Life pictures. There are
worlds between these two. You may be tempted to think that the Second Life pictures were generated by Stable Diffusion rather than rendered in a virtual world.
So if I just mentioned the skin tone, how would you, without looking at the image, be able to know what exactly the skin looks like, whether it's a cartoonish solid colour or a set of photo-realistic textures?
This goes on and on.
In fact, when I describe an avatar, I also have to
explain the avatar. With that, I mean I have to explain the avatar system in Second Life and OpenSim and how it works. Unlike in many other virtual worlds, they aren't single-piece "monoblock avatars" à la Ready Player Me that can only be varied in overall size, if at all. They are highly customisable in-world with no external tools, and they are highly modular. This explains why avatars look like what they look like. And seriously, only experienced Second Life and/or OpenSim users even expect this level of in-world customisability and modularity to be possible in the first place.
If I were to go all the way, I'd even explain what mesh is, what rigged and fitted mesh is, how it works, how it differs from the "classic" means of customising Second Life and OpenSim avatars, what Bakes-on-Mesh is and what advantages Bakes-on-Mesh has over what was used before it was there.
In , I omitted the latter, and I also forgot to mention that the skin textures are near-photo-realistic, and the skirt texture is photo-realistic. Maybe I'll edit the image description preamble in the post and add both.
But what increased the effort of describing the avatar are the OSgrid logo and the logo for OSgrid's 17th birthday. Nobody who isn't an active OpenSim user knows what either looks like without seeing it. So I had to describe both in detail, also because the latter includes three independent lines of text which I had to transcribe.
This brings me to the last point: text in images and its transcription.
There is the rule that says that any and all text within the borders of an image must be transcribed 100% verbatim.
In the Fediverse, this rule is usually applied to screenshots, Mastodon screenshots, pictures of newspaper clippings or photographs of posters. This is usually manageable except when the text in a newspaper clipping is over 1,500 characters long.
The various alt-text guides out there that mention text transcripts are never about social media, what with how the puny character limits for alt-text in commercial social media make transcribing text difficult to nigh-impossible.
They're always about static Web sites and blogs. This mostly means commercial Web sites, professional scientific Web sites, professional scientific news blogs or professional Websites or blogs about technology.
So some of the guides know the occasional edge-case. A graph or a flowchart is a near-edge-case due to its complexity. There are specific defined rules on how to describe graphs, although they may differ. One solution is to fully describe a flowchart in the main text. As for graphs, it's often considered okay to sweep the exact details under the rug unless every last figure in the graph absolutely matters. A complex table is very much an edge-case because a full transcript of it in alt-text would be too long. It is not as clear how exactly a table has to be handled, but it is being discussed.
Most importantly, both of these edge-cases are very well-known. They happen to professional Web designers and bloggers over and over and over again, and they have been happening for many, many years.
My images are edge-cases, too. But they're completely unknown edge-cases, and they're edge-cases in ways that are completely unknown to all those who write alt-text guides and who work on standardising accessibility.
I often have lots of bits of text in them in various places and ways. Signs, posters, box labels, vendor labels, teleporter labels, flags, neon signs, prints on clothing, the writing on an OpenSimWorld beacon, what-have-you. All this could be in my images. And a lot of it has already been in my images, sometimes in great numbers.
This alone is an edge-case. An edge-case which none of the alt-text guides handle because none of them are even aware of its very existence. The same situation could occur in real-life photographs, but real-life photographs practically always focus on one specific element or a few, and everything else is recommended to be treated as non-existent in alt-text. So part of my edge-case is that this focus on one specific text-less element is simply not there in most of my images.
I can solve it gracefully by
actually transcribing absolutely everything verbatim, 1:1, 100%. It usually doesn't go into the alt-text anyway unless it isn't too much. It definitely always goes into the long, full, detailed description in the post, in case you've forgotten that I usually describe each one of my images twice. Only if it's a maximum of three or four short bits of text, and I don't have to describe too much in the alt-text otherwise, only then the transcripts go into the alt-text as well. But I never put some transcripts into the alt-text and some not. Either all of them or none. Regardless, all of them always go into the full description in the post.
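As a toy illustration of that all-or-nothing rule, here is a hypothetical Python sketch of the decision just described. The thresholds (four snippets, a 1,500-character alt-text budget) and all names are assumptions picked purely for demonstration, not fixed rules from any guideline.

# Toy model of the all-or-nothing transcript rule described above.
MAX_SNIPPETS_FOR_ALT_TEXT = 4       # assumed cut-off for "a few short bits of text"
ALT_TEXT_BUDGET = 1500              # assumed alt-text budget (Mastodon's limit)

def place_transcripts(transcripts: list[str], alt_text_so_far: str) -> dict:
    """Transcripts always go into the full description; into the alt-text only all together."""
    placement = {"full_description": list(transcripts), "alt_text": []}
    room_left = ALT_TEXT_BUDGET - len(alt_text_so_far)
    all_fit = sum(len(t) for t in transcripts) <= room_left
    if transcripts and len(transcripts) <= MAX_SNIPPETS_FOR_ALT_TEXT and all_fit:
        # Either every transcript goes into the alt-text as well, or none of them do.
        placement["alt_text"] = list(transcripts)
    return placement

print(place_transcripts(["OSgrid", "Happy 17th birthday, OSgrid!"], "A sign near the landing point."))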
What makes my unhandled edge-case even more of an unhandled edge-case is that text often tends to be illegible in my images.
It may be too small to be readable at the resolution at which I post the image, e.g. two or three pixels high. It may be so small that it can't be identified as text at that resolution. It may be so tiny that it's practically invisible, e.g. less than half a pixel high. But still, it's text, and technically, it's within the borders of the image. Or it could be partially obscured, e.g. a poster with a lot of text on it, but 30% of the front area of the poster is covered by the trunk of a tree in front of it.
No alt-text or image description guide that deals with text transcripts
ever deals with unreadable text. They don't declare that it must not be transcribed. They don't declare that it needn't be transcribed. They don't declare explicitly that it must be transcribed.
The very existence of illegible text in an image completely eludes absolutely everyone who writes guidelines for image descriptions.
Given the lack of special treatment requirements of illegible text, I have to suppose that illegible text has to be handled the exact same ways as legible text. It has to be transcribed verbatim. And so I transcribe it verbatim.
Now you may say that that shouldn't even be possible. If that text is illegible, I can't read it either, so how should I be able to transcribe it?
But I
can read it.
See, when I describe an image, I don't just simply look at the image at the resolution at which I intend to post it. I look at
the real deal in-world. I don't look at the representation of a sign in my image.
I look at the sign proper, in the virtual world, right where the sign is installed.
I can walk around with my avatar. I can move the camera independently from the avatar. I can move around obstacles such as tree trunks. I can close up on everything.
And all of a sudden, that tiny white blot of 4x3 pixels on which the writing can't even be made out as such unfolds before my eyes as a sign with a front texture of 1024x768 pixels with half a dozen clearly readable lines of text.
And all of a sudden, I can read that text behind that tree.
In fact, I'd say that even
fully sighted people can profit from how I transcribe everything, legible or not. If even the most eagle-eyed Fediverse users can't read something in one of my images, they still know what's written there because I can read it, and I tell them what's written there.
Lastly, all this is just me following existing guidelines and rules, from explaining jargon and unknown words to describing people to transcribing text to taking my audience into account and giving them what they require. It's just that my images are extremely obscure edge-cases for which no officially defined and agreed-upon exceptions exist and probably never will.
#
Long #
LongPost #
CWLong #
CWLongPost #
OpenSim #
OpenSimulator #
Metaverse #
VirtualWorlds #
AltText #
AltTextMeta #
CWAltTextMeta #
ImageDescription #
ImageDescriptions #
ImageDescriptionMeta #
CWImageDescriptionMeta #
Transcript #
Transcripts #
Transcription #
Inclusion #
Inclusivity #
A11y #
Accessibility #
AccessibilityMattersWalz has a long history with China. But he's not pro-China.
Vice President Harris's running mate taught in China in 1989, sparking an enduring interest in the country but also leading him to criticize its human rights abuses.
The more complex and/or obscure an image is, and the more description and explanation it requires for casuals to understand it, the harder it is to describe and explain it in a way that makes the image accessible to absolutely everyone. But exactly this should always be the goal: make your images accessible to absolutely everyone with no exception.
At some point, you will inevitably have to throw
someone in front of the bus to cater to someone else.
It's impossible to describe the kind of images I post in a way that's short enough for absolutely all blind or visually-impaired users to comfortably have the sole remaining image description read to them by their screen readers
and informative and explanatory enough for casuals and especially neurodiverse people to fully understand it. The former will require
one image description in the alt-text that's no longer than 200 characters plus
no additional image description in the post. The latter will require everything, absolutely everything to be explained and described from the absolute basics up to details like single elements in the image which means tens of thousands of characters, which in turn will have to go into the post. I guess you can see the contradiction.
If the image that you want to post shows something sufficiently niche and obscure, there is no way around this, because at least one group of disabled people will inevitably have to suffer from at least one image description that's completely useless and totally inconvenient to them.
#
Long #
LongPost #
CWLong #
CWLongPost #
AltText #
AltTextMeta #
CWAltTextMeta #
ImageDescription #
ImageDescriptions #
ImageDescriptionMeta #
CWImageDescriptionMeta #
Inclusion #
Inclusivity #
A11y #
Accessibility #
AccessibilityMattersDanie van der MerweI get who your target audience may be, but the alt-text is really intended for the sight impaired audience, not for visually fine readers
Yes, but I didn't want to taint my track record of having described all my images since summer 2023 by not describing these two, regardless of whether or not descriptions were actually necessary for my specific target audience. It's bad enough that I didn't have the time to apply my own high standards for image descriptions to these two images.
That means any sight impaired user who follows you or follows a hashtag you choose to use will indeed be seeing and trying to read your posts
My posts, yes, unless I limit who receives and therefore is permitted to read them.
Comments on Hubzilla should be expected to work differently from replies on Mastodon. They should only go to the author of the post at the beginning of the thread and be spread to all those who have a concept of conversations, and who have the post in their timelines/streams. In addition, they should go to those whom they mention and have them notified.
Curiously, at least when the post is fully public, my comments seem to go out to almost all my connections. The only exceptions are the Threadiverse unless I comment on something in the Threadiverse, those who filter out something in the comments and those on at least Hubzilla who haven't received the post and all comments in the branch before my comment, if any.
Unfortunately, on Hubzilla, whoever sends the post has the exclusive right to define the audience of the entire thread. I can't restrict the audience of a comment to a post to fewer actors than the audience of the post itself.
So in my case I try to keep my alt-text descriptions shorter and more concise, specifically for screenreaders.
For my own images, I always feel that image descriptions in not significantly more than 200 characters wouldn't even start to actually describe anything. They'd stop at what the image is about. But they would barely, if at all, get to mentioning at least the most important elements that are in the image. And they certainly wouldn't get to describing what any of them look like. And I'm still not talking about explanations, which shouldn't go into alt-text anyway, and text transcripts.
I feel that even with the additional, full, long, detailed image description in the post as a "backup", such a short description in the alt-text would be so insufficient that it would be half-useless and border on being ableist.
If you want to know what my image posts usually look like: with exactly 1,500 characters of alt-text, a bit over 1,400 of which are image description, plus over 60,000 characters of image description plus explanations plus text transcripts in the post. Unfortunately, my standards have advanced since then, and it is outdated now. But it's still probably the image post with the longest image description in the whole Fediverse.
I could also link to my most recent image post. It's more up-to-date, i.e. it avoids two mistakes I've made in the post that I've linked to. But it's pretty non-standard: It has two images which I mostly describe together in the post in what's essentially one very long common preamble of almost 11,000 characters, followed by individual descriptions for the images of a bit over 2,800 and a bit under 6,600 characters. This is because I was done describing the first image when I decided to add a second image which has a lot in common with the first one. The alt-texts have 1,500 and 1,499 characters respectively, but they use more characters to explain where the long descriptions can be found.
#
Long #
LongPost #
CWLong #
CWLongPost #
FediMeta #
FediverseMeta #
CWFediMeta #
CWFediverseMeta #
Mastodon #
Hubzilla #
Streams #
(streams) #
AltText #
AltTextMeta #
CWAltTextMeta #
ImageDescription #
ImageDescriptions #
ImageDescriptionMeta #
CWImageDescriptionMeta #
Inclusion #
Inclusivity #
A11y #
Accessibility #
AccessibilityMatters