Chillingly, most people who develop long COVID did not have particularly vicious cases of the virus initially.
That's in part because many more people experience a mild form of COVID than a severe one.
(Across most studies, long COVID risk does increase with the severity of the initial infection.)
And each time people become reinfected with the virus, they're at risk of developing long COVID, even if they didn't experience it previously.
The authors note that studies on recovery from long COVID are sparse and inconsistent.
But those that have closely evaluated individual manifestations of the virus have found recovery rates to be fairly low at one year, with only 7% to 10% of patients fully recovered after two years.
For millions and millions of people, the debilitating effects of long COVID are just that.
The economic toll is its own story.
A Census Bureau Household Pulse Survey in 2022 found that between 2 million and 4 million working-age Americans were out of work because they were sidelined by the effects of long COVID.
Meanwhile, 20% of people with long COVID surveyed by the United Kingdom's Trades Union Congress said they were not working.
Another 16% were working reduced schedules.
The estimated $1 trillion annual global economic hit, concentrated in Organization for Economic Cooperation and Development (OECD) countries, stems from reductions in quality of life and labor force participation, the Nature Medicine review says.
And that price tag does not factor in the direct costs of healthcare, another likely source of deep financial strain.
Al-Aly and his co-authors are pleading for governments, especially U.S. health agencies, to dramatically step up their efforts to investigate long COVID, learn more about its mechanisms and pathways, and develop vaccines that better block infection.
They're also pushing for large-scale platform trials to test multiple drugs simultaneously, so we can quickly learn what works and what doesn't.
They have an ally in Congress.
U.S. Sen. Bernie Sanders recently introduced the Long COVID Research Moonshot Act,
a bill that would guarantee $1 billion per year for a decade to the National Institutes of Health
to support long COVID research,
the urgent pursuit of treatments,
and the expansion of care for patients across the country.
Sanders' news release announcing the act puts the long COVID count in the U.S. at
22 million, including a million children.
Among other things, the bill would require the NIH to establish a dedicated long COVID research program and infrastructure to speed up clinical trials, and to make any resulting treatments available so that every patient could receive them.
"The legislation that we have introduced finally recognizes that long COVID is a public health emergency," Sanders said.
"Congress must act now to ensure treatments are developed and made available for Americans struggling with long COVID."
If it accomplishes nothing else, Sanders' proposal may help lift the veil on long COVID in the country and around the world.
It's a topic that has been largely pushed into the shadows,
as governmental agencies
and official policy
tried to construct a reality in which COVID in general was just no longer a significant issue.
That was never the case, and it's certainly not the case now.
"The reality is otherwise," Al-Aly says.
As the virus mutates into new strains and continues to affect millions, the long tail of COVID is once again forcing itself to center stage.
Multiple COVID subvariants are powerfully present in the U.S., and reports from California indicate that some patients are complaining of sore throats so severe it feels like they're swallowing razors or broken glass, according to the Los Angeles Times.
Topol, meanwhile, says the Sato Lab in Japan has characterized one of the newest COVID strains, KP.3.1.1, in a preprint as having "the most immune evasion and infectivity" of any of the variants derived from previous powerful iterations of the JN.1 strain, which was prominent last winter.
Although the Centers for Disease Control and Prevention says that severe outcomes and deaths from COVID have decreased substantially overall from the pandemic's early days,
wastewater data shows viral activity is high nationally
and COVID-19 infections are growing or likely growing in 35 states.
More COVID infections mean more cases of long COVID.
And long COVID is already exacting an enormous toll on both the people and economies of the world.
Those are words you aren't hearing from many government bodies.
But, the researchers say, the evidence tells the story.
"Despite the dire impact of long COVID on individuals and society, I fear that many are still unaware of the danger," says Akiko Iwasaki, professor of immunology at Yale School of Medicine and co-lead investigator of the university's COVID-19 Recovery Study. "There is an urgent need to provide proper diagnosis and treatment for people living with long COVID."
The authors lay out a number of preventive policy measures, including increased use of masking, improved ventilation systems, and a vaccination program that pairs COVID shots with seasonal flu shots to extend their reach and improve uptake.
But there's a genuine question as to whether enough folks are paying close attention for any of this to matter.
As the authors point out,
a survey found that as of last August,
one-third of American adults still hadn't even heard of long COVID.
In reality, long COVID was identified and defined in the first full year of the pandemic, 2020,
and it has been increasing its case count ever since.
This form of COVID is particularly perilous because, for many people, its symptoms may last years
(or a lifetime)
and their effects may trigger all sorts of associated problems and costs.
Long COVID affects nearly every organ system, the review notes,
including the cardiovascular, immune, gastrointestinal and reproductive systems.
While more than 200 symptoms have been identified, common symptoms include
memory problems,
difficulty concentrating,
fatigue,
heart palpitations,
chronic cough,
shortness of breath
and recurring headaches.
Long COVID is a $1 trillion problem with no cure.
Experts plead for governments to wake up
For months, governmental officials around the world have appeared to want to forgo discussing the specter of long COVID.
As a new review makes clear, that is wishful thinking, and the latest COVID variants may well kick long COVID into overdrive,
a scenario that researchers and experts have been warning about for some time.
"I think they (government agencies) are itching to declare that COVID is over and that long COVID does not exist," says Ziyad Al-Aly, director of the Clinical Epidemiology Center at the Veterans Affairs St. Louis Health Care System and lead author of the review. "It is much more pleasant to pretend as if emergency department visits and hospitalizations haven't been rising sharply this summer."
Al-Aly and several other top researchers lay out a difficult truth:
Long COVID has already affected an estimated 400 million people worldwide,
a number the authors say is likely conservative,
at an economic cost of about $1 trillion annually, equivalent to 1% of the global economy.
Moreover, the risk of a person being hit with long COVID rises with repeated infections of the virus itself,
and recent COVID activity has experts watching closely.
As review co-author Eric Topol noted in a recent blog post,
the current COVID incursion is ramping up quickly,
with one modeler estimating
900,000 new infections per day in the U.S. alone.
"The new wave is significant," Topol said via X. "It's hard to believe that we are well into our fifth year of this virus and not taking the requisite steps to gain control."
Well, there are people who steer clear of Facebook alternatives because of the bad associations with Facebook, from Cambridge Analytica all the way to the active support of the genocide against the Rohingya.
But it is understandable: once you've zeroed in on Twitter and on Twitter clones that are as faithful as possible, you can no longer easily cope with things that work completely differently, and first have to laboriously rethink everything.
First of all: Friendica is not a clone of the 2024 Facebook. Nor of the 2022 Facebook. Friendica was developed back in 2010, and not as a clone but as an alternative. And not as an equivalent alternative either, but as a better one.
A few things about Friendica that are typical of Facebook and untypical of Twitter/Mastodon:
- Connections are mutual by default. You don't have followers on one side and followed on the other. You only have what Facebook calls "friends", just without that dim-witted name that only Americans could come up with.
- Not everything is a post. If it's a reply, it isn't a post but a comment. Every thread has exactly one post at the very beginning; everything else is comments. Tumblr and blogs work exactly the same way.
- You get different input fields for comments than for posts.
- You can send posts specifically to lists. Except the lists aren't called lists but circles, and compared with Mastodon they're a hundred times easier to manage. By now probably even easier than on Facebook itself.
- No super-tight character limit. Facebook doesn't see itself as a microblog. Neither does Friendica.
- Mastodon doesn't suggest contacts to you. Friendica, as far as I know, does. Incidentally, so does everything descended from Friendica.
In some respects, though, Friendica is even closer to blogs than to Facebook:
- No character limit at all.
- Text formatting. And, as far as I know (I've been out of Friendica since the mid-2010s), the full programme, which also supports HTML. Friendica uses BBcode and can optionally use Markdown as well; see the sketch below this list.
- Images embedded in the text. As many as you like.
- Built-in "cloud storage" for the images. You know where your images are stored, because you put them there yourself.
- Titles.
- Summaries. That is, the thing Mastodon has repurposed for CWs, but still used as actual summaries. These days that only works via BBcode tags, but it works.
- And here, too: replies are not posts but comments. And there's a separate input mask for them.
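To make the formatting point concrete, here is a rough sketch of what a Friendica post body can look like. This is from memory, so treat the exact tag names as approximate; the URLs are placeholders, and [abstract] is, as far as I recall, the tag behind the summary feature mentioned above:

    [b]Bold[/b] and [i]italic[/i] text, [url=https://example.com]a link[/url],
    and an image embedded right in the running text:
    [img]https://example.com/photo.jpg[/img]
    [abstract]This summary is what ends up as the CW on the Mastodon side.[/abstract]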
So anyone coming from Twitter and then from Mastodon will have to adjust more when moving from Mastodon to Friendica than when moving from Twitter to Mastodon. But "why can't they just do it like Mastodon, since it's in the Fediverse" won't get you anywhere.
A brand-new English-language reading tip: a user who previously knew pretty much only Mastodon, then tried Friendica and is so blown away that she doesn't want to leave anymore.
Also worth reading is a piece from late October 2022, written just as Twitter had been taken over by Elon Musk.
#Long #LongPost #CWLong #CWLongPost #LangerPost #CWLangerPost #FediMeta #FediverseMeta #CWFediMeta #CWFediverseMeta #Friendica
"Given that, I'm not sure how long a screen reader takes to read 40,000-character alt-text per image."
Don't make the old mistake of taking "alt-text" and "image description" to be mutually 100% synonymous. Alt-text can be more than an image description, and an image description does not always go into the alt-text.
The 40,000 characters don't go into the alt-text. If they did, Mastodon would chop 38,500 characters off and throw them away, as would Misskey and all forks of both.
The 40,000 characters go into the post text body. The toot, if you want. Where you have a limit of 500 characters, I have none at all. On Hubzilla, which supports in-line embedding of images, they go directly below the images.
Into the alt-text goes a second, different, separate, independent, much shorter image description.
Each one of my images usually has two descriptions. One full, long, detailed description with all explanations and transcripts in the post text body. One significantly shorter description in the alt-text.
The long description in the post text body is there to deliver all necessary information without Mastodon, Glitch, Ecko, Hometown, Misskey, Firefish, Iceshrimp, Sharkey, Catodon etc. etc. putting the axe to it at the 1,500-character mark.
The short description in the alt-text is there to satisfy those Mastodon users who absolutely demand there be an actually sufficient image description in the alt-text, no matter how. Even if there already is a much longer image description in the post.
After all, they can't see the image description in the post. The post is hidden behind a content warning because it is longer than 500 characters. I always put all posts that exceed 500 characters behind a Mastodon-style content warning that starts with a StatusNet/Friendica/Hubzilla-style summary of the post, followed at least by "CW: long post (<number of characters in the post, raw count> characters)". Sometimes I take the extra effort upon me to specify how many of these characters are actual post and how many are image description, which makes it clear right away that there is an image description behind that CW.
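As a purely hypothetical illustration (the wording and all numbers are made up, not quoted from any real post), such a summary line might read:

    Sunset over a sandbox region in OSgrid; CW: long post (41,337 characters, 1,802 of which are actual post text, 39,535 of which are image description)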
What I always do is add to the alt-text the extra information that a longer, more detailed image description with explanations and text transcripts can be found in the post.
Depending on how much room the short image description leaves me in the alt-text, I also add where exactly the long image description can be found. And that's two different locations, depending on whether you're on something that supports in-line images (e.g. Pleroma, Akkoma, Friendica, Hubzilla, (streams)) or not (e.g. Mastodon, Misskey and their forks).
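If it helps to picture the mechanics as tooling, here is a minimal sketch in Python. It is purely illustrative, not something I actually run; the 1,500-character cap is Mastodon's alt-text limit mentioned above, and all names and the pointer wording are invented:

    # Minimal sketch of the two-description scheme (illustrative only).
    MASTODON_ALT_CAP = 1500  # Mastodon cuts alt-text off beyond this point

    POINTER = ("A longer, more detailed image description with explanations "
               "and text transcripts can be found in the post itself.")

    def build_image_post(post_text: str, long_desc: str, short_desc: str):
        """Return (post_body, alt_text) for a post with one image."""
        # The full, detailed description goes into the post body, below the text.
        body = post_text + "\n\n" + long_desc
        # The alt-text holds the short description plus the pointer to the long one.
        alt = short_desc + " " + POINTER
        if len(alt) > MASTODON_ALT_CAP:
            raise ValueError(f"Alt-text would be cut off on Mastodon: "
                             f"{len(alt)} > {MASTODON_ALT_CAP} characters")
        return body, alt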
"The rest of us can just look at the image, so we won't use alt-text as we read the post that accompanies the image for context."
On Mastodon, you'll only see the image and a content warning at first. The content warning starts with a summary, unusual for Mastodon, but standard on Hubzilla. Then it warns you about the post being long, including how long the post is. That'll most likely be a five-digit number.
When you open the content warning, you'll see a few hundred or a very few thousand characters of actual post text. Then, after two blank lines, comes the long image description, still in the post. And that's usually several tens of thousands of characters. If the post is recent enough, the image description is announced by an HTML headline, so even those for whom the images don't show up in the post know where the actual post text ends and the long image description begins.
If you want to know from personal experience what an image post of mine looks like on Mastodon, search for the hashtag #UniversalCampus. The second or third post from the top is a good example. It has a long image description in the post text body of not 40,000, but over 60,000 characters, my longest to date. The alt-text is precisely 1,500 characters long, 1,402 of which are image description.
By the way, the self-same post can also be seen on Hubzilla.
An example with multiple images can be found under the hashtag #OSFest2023. You have to scroll down until you find a post whose content warning starts with the summary, "Classic creations by Arcadia Asylum a.k.a. Lora Lemon/Aley at OpenSimFest 2023". This one is hopelessly outdated now, by the way.
There is a Hubzilla original of that one, too.
#Long #LongPost #CWLong #CWLongPost #AltText #AltTextMeta #CWAltTextMeta #ImageDescription #ImageDescriptions #ImageDescriptionMeta #CWImageDescriptionMeta
"Too dumb to even operate a search engine properly."
Why would they google for something they don't even suspect might exist?
And as long as you don't explicitly tell them that there are things other than Mastodon that are also in the Fediverse and also connect to Mastodon, it completely exceeds their imagination.
Would you, of your own accord, get the idea to search for decentralised, federated 3-D virtual worlds based on free, open-source software if I didn't tell you that such a thing exists? In any case, for those who have never heard of it, the Fediverse is even more far-fetched and even more unimaginable. It's the four-dimensional hypercube of the Internet.
#Long #LongPost #CWLong #CWLongPost #LangerPost #CWLangerPost #FediMeta #FediverseMeta #CWFediMeta #CWFediverseMeta #Fediverse #Mastodon #NichtNurMastodon
"Especially the universities, I don't understand how anyone can flagellate themselves with Mastodon like that. They're putting themselves in handcuffs and leg irons with such a limited service; it makes you want to run around screaming."
They don't know that anything other than Mastodon exists.
Because nobody tells them that anything other than Mastodon exists.
Because all the Mastodon users who want to bring the universities into the Fediverse tell them nothing about it.
Because every second one of them doesn't know it themselves, and the other half have other reasons against mentioning it.
And the universities have no other sources. Stupidly, apparently not even the technical universities.
#Long #LongPost #CWLong #CWLongPost #LangerPost #CWLangerPost #FediMeta #FediverseMeta #CWFediMeta #CWFediverseMeta #Fediverse #Mastodon #Unis4Mastodon #UnisInsFediverse #NichtNurMastodon
"I notice it in everyday conversations: only Mastodon is regarded as the real thing."
Someone once told me to my face that Mastodon is the absolute only Fediverse project that is "feature-complete", and that this is a fact.
I would have loved to slam a whole list of Hubzilla features down in front of him, prove some of them on the spot, and ask him whether Mastodon has those, too.
But in the course of their Fediverse onboarding, people generally "learn" first that only Mastodon exists, then that the Fediverse is only Mastodon, and then that Eugen Rochko invented the Fediverse and ActivityPub, that Mastodon was there first, that Mastodon is the Fediverse gold standard, and that everything that isn't Mastodon is nothing more than an alternative UI for Mastodon.
"But the same also happens with communication that calls for universities to join 'Mastodon' and then refuses to use 'Fediverse' for it."
Let me guess.
"'Mastodon' and 'Fediverse' are exactly the same thing. They both mean the same. There's no difference."
"People and the universities are far more likely to know Mastodon than the Fediverse. Don't approach them with the Fediverse; they won't understand it. Mastodon they're more likely to know and understand."
"It mustn't get too complicated for the universities. It's easier to invite them straight to Mastodon than to first explain the whole Fediverse to them." As if every university were staffed solely with tech-averse dullards.
"The universities only need a replacement for Twitter anyway." And Akkoma, Firefish, Iceshrimp etc. aren't Twitter replacements? Oh, right, you've never heard of them. And the universities will surely get by with at most 500 characters? And with the Mastodon high command's total refusal to support HTML rendering, because that supposedly wouldn't be microblogging?
Besides, I believe that once a technical university has set up its own Hubzilla hub, they'll be thrilled by the possibilities they get from a single source in a single place. Mario might even get PRs from professors or students aimed at improving it.
"Mastodon first, then we'll see." No, then we won't see. Most private users who land on Mastodon hardly ever get away from it within the Fediverse, because there's no one-click migration for dummies to wherever-else inside their favourite Mastodon app. And institutions like universities with their own instance are far less flexible still. Even if they wanted to move elsewhere, it would be a process of months or years until it was all done.
"How they then intend to reconcile Mastodon with Flipboard, WordPress and, later on, Threads is beyond me."
Not at all.
The mere concept, the mere idea that anything else, anything that isn't Mastodon, could be connected to Mastodon will be completely alien to them. The universities are being onboarded just like the many private Twitter refugees: there is only Mastodon, and Mastodon is decentralised on the inside but a walled garden towards the outside. Exactly the image that Rochko and the gGmbH keep trying to convey.
I think you must have experienced it often enough yourself: Mastodon users shitting bricks when, through you, they unexpectedly learned the utterly unimaginable the hard way: the Fediverse is not only Mastodon. Mastodon is also connected to other things. And yes, that's normal and intended.
Except individual private users can react to this insight within seconds. At a university, the insight itself first has to pass through various people and assorted departments and committees and whatnot until everyone who has anything to do with it actually knows. That alone can take weeks, if not months. A reaction to it can take even longer.
And when someone from outside Mastodon then tries to interact with the university's instance, and it isn't exactly like when someone else on Mastodon does it, the people in charge will stare at it like deer into headlights and know neither what is happening nor what they should do.
It will be hard enough as it is to keep university instances up to date. Maybe not at a technical university with, of necessity, capable IT, because it would be embarrassing if their own first-year computer science students had more of a clue than the in-house IT. But everywhere else there's the risk of install-once-and-never-update.
#Long #LongPost #CWLong #CWLongPost #LangerPost #CWLangerPost #FediMeta #FediverseMeta #CWFediMeta #CWFediverseMeta #Fediverse #Mastodon #Unis4Mastodon #UnisInsFediverse #NichtNurMastodon
"Well... I don't think anyone's necessarily expecting perfection. They just expect a reasonable and deliberate effort."
They may not expect perfection all over. But they expect full accessibility for themselves.
"There are some guidelines out there from WGBH and other sources that can help decide what is and isn't necessary to have a well-described image."
Trust me, I know these guidelines. I follow them to a tee as far as that's possible for me. They have influenced me greatly and still do.
Just recently, I've learned two new things from such a guide. One, dimensions must always be given as relative to a size that people, especially blind or visually-impaired people, are familiar with, e.g. human body height. Two, colours must always be described based on a specific selection of basic colours plus brightness/darkness plus saturation. Blind people have no concept of "cerise" or "crimson" or "mocha". These two points rendered all my previous image descriptions obsolete.
But my images are extreme edge-cases that put these guidelines and the general image accessibility rules to the test.
Let me give you a few examples.
First of all, in order to keep an image description, especially in alt-text, short and concise, the guides tell me to focus on what's important in the image in the context of the post, describe that and sweep everything else under the rug because it's unimportant.
This advice completely failed with one of my posts because there were several dozen singular elements in the image that were important within the context of the post. I did describe everything in the images, but only a tiny fraction of the 40,000-character description of the first image is about the surroundings.
It failed even more with certain other posts. In those cases, I wanted to show the whole scenery with everything in it. Nothing in the images was more or less important within the context of the post than anything else. Thus, I had to describe everything because, in a sense, everything was important.
It also failed in yet another case. Again, I wanted to show the whole scenery. But even if I had wanted to show only the central element of the image, the building, I would not have been able to shrink the image description to a sensible size. The building is so complex and at the same time so strange and unusual that a highly detailed and very long description of it was absolutely necessary and inevitable. It's for this reason that I normally try to avoid having buildings in my images.
Another piece of advice is to mind the target audience and write for them. This advice always appears in guidelines for alt-text on static Web sites or blogs. It never appears in guidelines for alt-text on social media.
Static Web sites or blogs have very defined target audiences, namely those who actively seek out the kind of content that is on these sites and comes from these blogs. Especially if they're about science, technology or the like, the Web designer or blogger can always count on the target audience being interested in the field and, thus, coming with substantial prior knowledge. Certain things simply don't have to be explained to this target audience because they already know them.
The target audience on social media, on the other hand, can be anyone. In the Fediverse, my target audience includes people who happen upon my posts on the federated timelines of their instances.
People who have one of my image posts boosted/reposted/renoted/repeated to them for whatever reason.
People who discover one of my image posts on their local timeline because someone on their instance has boosted it.
People who come across one of my image posts among the posts of someone whose profile they're checking out, because that user has boosted it. Granted, I don't think anyone has ever boosted any of my image posts, seeing as they're too intimidating for Mastodon, but it could always happen.
And, of course, people who happen to follow one of the hashtags I use, and that hashtag is not one about the topic (e.g. #OpenSim), but happens to be one that I use as a filter trigger (e.g. #LongPost). Or they're interested in all things alt-text, so they follow #AltText, but I put that hashtag on my image posts as well.
Now, my image posts are about a general topic with which maybe one in over 200,000 Fedizens is familiar. The chances that someone who happens upon one of my image posts is familiar with the general topic are practically nil.
Thus, taking my target audience into account, I also have to take into account that at least parts of my target audience know nothing about what my image posts are about. Absolutely zilch. Not even the basics.
For starters, this means I have to explain everything from the ground up. Look at the image posts I've linked to again. Open the content warnings again. Check the beginnings of the full image descriptions. Notice how much I have to ramble in order to only explain where the image was taken.
In real-life pictures, you'll probably never need more than 70 characters to name even the most obscure real-life location. In fact, you'll often get away with name-dropping because most people are familiar with the place or even a specific landmark anyway.
In my virtual world pictures, I can't get away with name-dropping. I have to go all the way and explain on which sim a landmark is, in which grid the sim is, that the grid is a virtual world based on OpenSim, what OpenSim is, what Second Life is, what grids are, what this particular grid is, why they're named grids, what regions are, what sims are, even what the Hypergrid is in order to make clear how my avatar got to that particular location in the first place. None of this can be expected to be known by everyone in the target audience.
Mentioning the place where I've created a picture often requires more characters than fit into Mastodon's alt text.
Likewise, most people should at least have a rough idea what the Capitol is and what it looks like. Or the Eiffel Tower. Or the Taj Mahal. Or a real-life cat, apart from the ways that cats look different from one another.
How many people, do you think, know right off the bat what the Sendalonde Community Library is and what it looks like? Would you know without Googling?
Or how many people, do you think, know right off the bat what an OpenSimWorld beacon is and what it looks like? You think I could get away with name-dropping "OpenSimWorld beacon" in an image description? It doesn't help that there are at least five standard variants from various versions, and some sim users customise them or build their own.
Or if I just simply wrote that there's a teleporter somewhere, how many people, do you think, would be sufficiently informed? How many non-sighted people would be sufficiently informed?
I could post a picture that includes a virtual version of the Eiffel Tower or the Golden Gate Bridge. Really, I could; I know where to find them. Blind or visually-impaired people may have a rough idea what they look like in the real world. People with bad mobile Internet, for whom my image doesn't load, may know exactly what these structures look like in the real world. But how are they supposed to know what the virtual models of these structures look like? And how is anyone supposed to know where they are located?
So, yes, I do take into account that parts of my target audience may not be fully sighted. That's why I'm doing all that in the first place. That's why I describe stuff that many people may deem unimportant.
See, I'm posting about 3-D virtual worlds. For many people, that's completely uninteresting. And I guess quite a few actually filter out the word "metaverse" for whatever reason.
But there may be people who have actually put high hopes into virtual worlds without knowing that virtual worlds already existed at that point. Then they heard about "the metaverse" dwindling away. And they were disappointed.
And then they come across one of my image posts. About a 3-D virtual world. And they're immediately on the edges of their seats in utter excitement. There are actually existing virtual worlds? Like, right now? That picture in that post is not an AI rendering; it was made in an actual, real, living 3-D virtual world!
If they read on, they'll discover on top of all that that the virtual world which my image shows is free and open-source. And decentralised. You know, like Mastodon. To the point where anyone can run their own instance. Or rather, grid. And connect it to all the other grids.
This is not what they've dreamed of. This exceeds what they've dreamed of. They would never even have dared to dream of something like this because it was so very unimaginable. And now it turns out it's very real. Oh, and it has been for 17 years already.
Now they're curious like you wouldn't believe. They want to know everything about these worlds. They want to soak up any information about them they can get, like a sponge. They want to explore these new and unknown worlds.
If they're sighted, they start by letting their eyes wander around the image. No matter what the image focuses on within the context of the post, they check out everything, also because all that stuff looks much, much more realistic and much, much less cartoonish than those official previews from Meta's Horizons, which all of a sudden look even more ridiculous and pathetic in comparison.
I could post a virtual cat picture. No, really, I could. They wouldn't just look at the cat and go, "Aww, how cute." They would check out the surroundings and the background just as well, even though the surroundings and the background don't really matter. But to them, they do matter, because it's such an exciting new world.
Now, if these people were blind or visually-impaired, one might argue that they wouldn't be nearly as interested in such a highly visual medium as a 3-D virtual world. I wouldn't count on that. Blind or visually-impaired people might be just as curious about these new worlds as sighted people and just as eager to go explore them. They might be just as eager to know what everything in any of my pictures looks like, regardless of context. But they need my help.
In general, I don't buy the notion that blind or visually-impaired people can easily be satisfied by mentioning stuff that's in an image, that you can get away with mentioning and name-dropping and nothing more.
I always expect blind or visually-impaired people to be all like, "Yeah, that's fine, but what does it look like?"
The more curious they are about something, the more eager they are to know what it looks like. And the newer and/or more obscure something is to them, the less likely they are to already know what it looks like. And the virtual worlds I write about are completely unknown to them at this point.
All this ties in with the next point. And that's to avoid technical terminology and jargon. If you can't, you have to explain it in such a way that even the most casual audience can fully grasp it with no problems and without having to look up anything themselves. Anything less is ableist.
I can't fully avoid it. If I tried, I would have to constantly write around it in such a way that my image descriptions would get even more confusing.
So I have to explain it. All of it. Down to the very basics. Down to something that I can expect everyone in the Fediverse to know and be familiar with.
Seeing as how very niche and obscure and special-interest my virtual worlds topic is, and how many unfamiliar elements will inevitably appear in both my images and their descriptions, I have a whole lot to explain. Even explaining Second Life would be easier than explaining OpenSim, because explaining OpenSim requires a previous explanation of Second Life.
If anyone believes that I can cover all that in significantly under 500 characters in a way that absolutely everyone can understand right away, with no special prior knowledge, without Googling and without asking me questions afterwards, I have a bridge to sell them.
Then there is how people should be described. I only post photographs of real-life people if they're part of a known, established meme template.
Otherwise, virtual avatars take their place. I started avoiding posting avatars after I learned about the eye contact trigger. The issue here is two-fold: Mastodon doesn't hide images behind content warnings. And Hubzilla can't make Mastodon blank out sensitive images. So any sensitive image I post on Hubzilla will always end up immediately in plain sight for everyone, at least for those who don't have filters that catch these image posts.
When I couldn't really avoid having an avatar in an image, I showed the avatar from behind so that the face was completely invisible. That still substantially increased the effort and length of the description, but it was mostly safe from triggering people, and I didn't have to go through the effort of describing the avatar's face and facial expressions.
Now I'm also on (streams) which can make Mastodon blank out sensitive images using one out of two specific hashtags. So there's no stopping me posting pictures with avatars in them and even showing their faces.
Describing avatars can end up tedious, though, much more tedious than describing comparable real-life humans, applying the same description rules.
First of all, there's no skimping. There's no "it doesn't matter what the avatar looks like". It always matters to someone what the avatar looks like. Again, exciting new world to explore and all; see above. And if the image is a portrait of the avatar, it matters very much what the avatar looks like.
Next, there's much less that doesn't have to be explained because it's a given. For example, there's quite a bit of debate on whether or not to mention a real-life human's skin tone, and if so, how. The general consensus is usually to limit it to "light, medium light, medium, medium dark, dark". Not much more is necessary, except maybe freckles or something. But a real-life human skin always looks like a real-life human skin. That's a given, isn't it?
In a virtual world, it isn't a given. In case you were unaware: The skin of a human avatar in a virtual world can be a plain, solid tint on a 3-D mesh. It can be photo-realistic. It can be one or multiple actual photographs, e.g. because the whole avatar was generated from photographs and a 3-D laser scan of a real-life human. And it can be anything in-between.
Look up early promo pictures showing Mark Zuckerberg's personal Horizons avatar. Then search Flickr for Second Life pictures. There are worlds between the two. You may be tempted to think the Second Life pictures were generated by Stable Diffusion rather than rendered in a virtual world.
So if I just mentioned the skin tone, how would you, without looking at the image, be able to know what exactly the skin looks like, whether it's a cartoonish solid colour or a set of photo-realistic textures?
This goes on and on.
In fact, when I describe an avatar, I also have to explain the avatar. By that, I mean I have to explain the avatar system in Second Life and OpenSim and how it works. Unlike in many other virtual worlds, avatars there aren't single-piece "monoblock avatars" à la Ready Player Me that can only be varied in overall size, if at all. They are highly customisable in-world with no external tools, and they are highly modular. This explains why avatars look the way they look. And seriously, only experienced Second Life and/or OpenSim users even expect this level of in-world customisability and modularity to be possible in the first place.
If I were to go all the way, I'd even explain what mesh is, what rigged and fitted mesh is, how it works, how it differs from the "classic" means of customising Second Life and OpenSim avatars, what Bakes-on-Mesh is and what advantages Bakes-on-Mesh has over what was used before it was there.
In one recent image post, I omitted the latter, and I also forgot to mention that the skin textures are near-photo-realistic and the skirt texture is photo-realistic. Maybe I'll edit the image description preamble in the post and add both.
But what really increased the effort of describing the avatar were the OSgrid logo and the logo for OSgrid's 17th birthday. Nobody who isn't an active OpenSim user knows what either looks like without seeing it. So I had to describe both in detail, also because the latter includes three independent lines of text which I had to transcribe.
This brings me to the last point: text in images and its transcription.
There is the rule that says that any and all text within the borders of an image must be transcribed 100% verbatim.
In the Fediverse, this rule is usually applied to screenshots, Mastodon screenshots, pictures of newspaper clippings or photographs of posters. This is usually manageable except when the text in a newspaper clipping is over 1,500 characters long.
The various alt-text guides out there that mention text transcripts are never about social media, what with how the puny character limits for alt-text in commercial social media make transcribing text difficult to nigh-impossible.
They're always about static Web sites and blogs. This mostly means commercial Web sites, professional scientific Web sites, professional scientific news blogs or professional Websites or blogs about technology.
So some of the guides know the occasional edge-case. A graph or a flowchart is a near-edge-case due to its complexity. There are specific defined rules on how to describe graphs, although they may differ. One solution is to fully describe a flowchart in the main text. As for graphs, it's often considered okay to sweep the exact details under the rug unless every last figure in the graph absolutely matters. A complex table is very much an edge-case because a full transcript of it in alt-text would be too long. It is not as clear how exactly a table has to be handled, but it is being discussed.
Most importantly, both of these edge-cases are very well-known. They happen to professional Web designers and bloggers over and over and over again, and they have been happening for many, many years.
My images are edge-cases, too. But they're completely unknown edge-cases, and they're edge-cases in ways that are completely unknown to all those who write alt-text guides and who work on standardising accessibility.
My images often have lots of bits of text in them, in various places and ways. Signs, posters, box labels, vendor labels, teleporter labels, flags, neon signs, prints on clothing, the writing on an OpenSimWorld beacon, what-have-you. All this could be in my images. And a lot of it has already been in my images, sometimes in great numbers.
This alone is an edge-case. An edge-case which none of the alt-text guides handle because none of them are even aware of its very existence. The same situation could occur in real-life photographs, but real-life photographs practically always focus on one specific element or a few, and everything else is recommended to be treated as non-existent in alt-text. So part of my edge-case is that this focus on one specific text-less element is simply not there in most of my images.
I can solve it gracefully by actually transcribing absolutely everything verbatim, 1:1, 100%. It usually doesn't go into the alt-text anyway, unless it isn't too much. It definitely always goes into the long, full, detailed description in the post, in case you've forgotten that I usually describe each one of my images twice. Only if there's a maximum of three or four short bits of text, and I don't have to describe too much else in the alt-text, do the transcripts go into the alt-text as well. But I never put some transcripts into the alt-text and some not. Either all of them or none. Regardless, all of them always go into the full description in the post.
What makes my unhandled edge-case even more of an unhandled edge-case is that text often tends to be illegible in my images.
It may be too small to be readable at the resolution at which I post the image, e.g. two or three pixels high. It may be so small that it can't even be identified as text at that resolution. It may be so tiny that it's practically invisible, e.g. less than half a pixel high. But still, it's text, and technically, it's within the borders of the image. Or it could be partially obscured, e.g. a poster with a lot of text on it, but 30% of the front area of the poster is covered by the trunk of a tree in front of it.
No alt-text or image description guide that deals with text transcripts ever deals with unreadable text. They don't declare that it must not be transcribed. They don't declare that it needn't be transcribed. They don't explicitly declare that it must be transcribed.
The very existence of illegible text in an image completely eludes absolutely everyone who writes guidelines for image descriptions.
Given the lack of special treatment requirements for illegible text, I have to suppose that illegible text has to be handled in exactly the same way as legible text. It has to be transcribed verbatim. And so I transcribe it verbatim.
Now you may say that that shouldn't even be possible. If the text is illegible, I can't read it either, so how should I be able to transcribe it?
But I can read it.
See, when I describe an image, I don't just simply look at the image at the resolution at which I intend to post it. I look at the real deal in-world. I don't look at the representation of a sign in my image. I look at the sign proper, in the virtual world, right where the sign is installed.
I can walk around with my avatar. I can move the camera independently from the avatar. I can move around obstacles such as tree trunks. I can close up on everything.
And all of a sudden, that tiny white blot of 4x3 pixels on which the writing can't even be made out as such unfolds before my eyes as a sign with a front texture of 1024x768 pixels with half a dozen clearly readable lines of text.
And all of a sudden, I can read that text behind that tree.
In fact, I'd say that even fully sighted people can profit from how I transcribe everything, legible or not. If even the most eagle-eyed Fediverse users can't read something in one of my images, they still know what's written there, because I can read it, and I tell them what's written there.
Lastly, all this is just me following existing guidelines and rules, from explaining jargon and unknown words to describing people to transcribing text to taking my audience into account and giving them what they require. It's just that my images are extremely obscure edge-cases for which no officially defined and agreed-upon exceptions exist and probably never will.
#Long #LongPost #CWLong #CWLongPost #OpenSim #OpenSimulator #Metaverse #VirtualWorlds #AltText #AltTextMeta #CWAltTextMeta #ImageDescription #ImageDescriptions #ImageDescriptionMeta #CWImageDescriptionMeta #Transcript #Transcripts #Transcription #Inclusion #Inclusivity #A11y #Accessibility #AccessibilityMatters
The more complex and/or obscure an image is, and the more description and explanation it requires for casuals to understand it, the harder it is to describe and explain it in a way that makes the image accessible to absolutely everyone. But exactly this should always be the goal: make your images accessible to absolutely everyone, with no exception.
At some point, you will inevitably have to throw someone under the bus to cater to someone else.
It's impossible to describe the kind of images I post in a way that's short enough for absolutely all blind or visually-impaired users to comfortably have the sole remaining image description read to them by their screen readers, and at the same time informative and explanatory enough for casuals and especially neurodiverse people to fully understand it. The former will require one image description in the alt-text that's no longer than 200 characters, plus no additional image description in the post. The latter will require everything, absolutely everything, to be explained and described, from the absolute basics up to details like single elements in the image, which means tens of thousands of characters, which in turn will have to go into the post. I guess you can see the contradiction.
If the image that you want to post shows something sufficiently niche and obscure, there is no way around this, because at least one group of disabled people will inevitably have to suffer from at least one image description that's completely useless and totally inconvenient to them.
#Long #LongPost #CWLong #CWLongPost #AltText #AltTextMeta #CWAltTextMeta #ImageDescription #ImageDescriptions #ImageDescriptionMeta #CWImageDescriptionMeta #Inclusion #Inclusivity #A11y #Accessibility #AccessibilityMatters
Danie van der Merwe: "I get who your target audience may be, but the alt-text is really intended for the sight-impaired audience, not for visually fine readers."
Yes, but I didn't want to taint my track record of having described all my images since summer 2023 by not describing these two, regardless of whether or not descriptions were actually necessary for my specific target audience. It's bad enough that I didn't have the time to apply my own high standards for image descriptions to these two images.
"That means any sight-impaired user who follows you, or follows a hashtag you choose to use, will indeed be seeing and trying to read your posts."
My posts, yes, unless I limit who receives and therefore is permitted to read them.
Comments on Hubzilla should be expected to work differently from replies on Mastodon. They only go to the author of the post at the beginning of the thread, and from there they are spread to all those who have a concept of conversations and who have the post in their timelines/streams. In addition, they go to those whom they mention and have them notified.
Curiously, at least when the post is fully public, my comments seem to go out to almost all my connections. The only exceptions are the Threadiverse unless I comment on something in the Threadiverse, those who filter out something in the comments and those on at least Hubzilla who haven't received the post and all comments in the branch before my comment, if any.
Unfortunately, on Hubzilla, whoever sends the post has the exclusive right to define the audience of the entire thread. I can't restrict the audience of a comment to a post to fewer actors than the audience of the post itself.
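If it helps to picture that behaviour, here is a toy model in Python; it is purely illustrative, with made-up names, and has nothing to do with Hubzilla's actual code or data model:

    # Toy model: the thread owner fixes the audience for the whole conversation,
    # and comments are relayed by the owner rather than fanned out by the commenter.
    class Conversation:
        def __init__(self, owner: str, audience: set[str]):
            self.owner = owner        # channel that wrote the start post
            self.audience = audience  # set once by the owner, for the entire thread

        def comment(self, author: str, text: str) -> dict[str, str]:
            # A comment goes upstream to the owner, who relays it to the
            # thread's audience. The commenter cannot narrow it any further.
            return {member: text for member in self.audience if member != author}

    convo = Conversation("alice", {"alice", "bob", "carol"})
    deliveries = convo.comment("bob", "This reaches the whole thread audience.")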
"So in my case, I try to keep my alt-text descriptions shorter and more concise, specifically for screen readers."
For my own images, I always feel that an image description of not significantly more than 200 characters wouldn't even start to actually describe anything. It would stop at what the image is about. It would barely, if at all, get to mentioning even the most important elements in the image. And it certainly wouldn't get to describing what any of them look like. And I'm still not talking about explanations, which shouldn't go into alt-text anyway, or text transcripts.
I feel that even with the additional full, long, detailed image description in the post as a "backup", such a short description in the alt-text would be so insufficient that it would be half-useless and border on being ableist.
If you want to know what my image posts usually look like: my longest one so far has exactly 1,500 characters of alt-text, a bit over 1,400 of which are image description, plus over 60,000 characters of image description plus explanations plus text transcripts in the post. Unfortunately, my standards have advanced since then, and it is outdated now. But it's still probably the image post with the longest image description in the whole Fediverse.
I could also point to my most recent image post. It's more up-to-date, i.e. it avoids two mistakes I made in the post mentioned above. But it's pretty non-standard: it has two images which I mostly describe together in the post, in what's essentially one very long common preamble of almost 11,000 characters, followed by individual descriptions for the images of a bit over 2,800 and a bit under 6,600 characters. This is because I was done describing the first image when I decided to add a second image which has a lot in common with the first one. The alt-texts have 1,500 and 1,499 characters respectively, but they use more characters to explain where the long descriptions can be found.
#Long #LongPost #CWLong #CWLongPost #FediMeta #FediverseMeta #CWFediMeta #CWFediverseMeta #Mastodon #Hubzilla #Streams #(streams) #AltText #AltTextMeta #CWAltTextMeta #ImageDescription #ImageDescriptions #ImageDescriptionMeta #CWImageDescriptionMeta #Inclusion #Inclusivity #A11y #Accessibility #AccessibilityMatters
I thought the Mastodon crowd was already fed up with the "reply guys" from Friendica who barge into their threads uninvited and inform them that "Mastodon" and "Fediverse" are, by the way, not congruent, that there are entirely different things in the Fediverse as well, and so on and so forth.
But there's no other way to explain it to them. I estimate that over 95% of communication on Mastodon stays within Mastodon. If you're not on Mastodon, you have hardly any reach on Mastodon.
At least we're currently lucky that the Fediverse is barely growing or even shrinking. That means we could finally catch up for once and wouldn't immediately be left behind again by the next refugee wave, which would once again roll almost exclusively straight to mastodon.social.
I don't see much of it myself, because I don't let posts from most of my Mastodon contacts in, at most comments and DMs. At best, I could play the good old hashtag-search-on-mastodon.social game. I could also subscribe to hashtags as feeds, but those would clutter up my stream again.
CC:
#Long #LongPost #CWLong #CWLongPost #LangerPost #CWLangerPost #FediMeta #FediverseMeta #CWFediMeta #CWFediverseMeta #Fediverse #NichtNurMastodon
I have no other choice.
First of all, each one of the images contains three images in turn, all three of which must be described.
Furthermore, all text in an image must be transcribed verbatim. No exception. Each one of the images contains 18 individual pieces of text. The first image alone has over 600 characters to transcribe:
- Login
- You are logged in via OpenWebAuth - welcome!
- Jupiter Rowland
- jupiterrowland@hub.netzgemeinde.eu
- You are known by the following identifiers:
- UID:
- 2c22c620-d41b-4613-92c1-9f92d33c4bad
- The primary ID that this site has assigned to you.
- Fediverse ID:
- jupiterrowland@hub.netzgemeinde.eu
- Your unique ID across the Fediverse.
- Actor ID:
- https://hub.netzgemeinde.eu/channel/jupiterrowland
- Your identity on the OpenWebAuth network.
- Log out
- What is this site?
- It's an experiment with Single Sign-On for the Fediverse and other "open social web" projects.
- For more information, see the discussion thread here.
The second image has the same fields, but more characters because some of the IDs are longer.
And this time, all text is actually readable. This is not another case of countless microscopically tiny pieces of text of which you wouldn't even know where they are in the image unless you paid close attention to the description.
The target audience of my post should not go far beyond:
- Fentiger, author of the start post, on Hubzilla
- Mike Macgirvin, who has commented in the thread, on (streams)
- Emanuel, who has commented in the thread, on Hubzilla
- Mario Vavti, who has liked the post, on Hubzilla
- Chris, who has repeated the post, on Hubzilla
These five will have my comment automatically delivered to them because they have received the post on their streams, and thus, they will be notified by all other interactions with the post and the comment thread. And I can guarantee you that all five are sighted.
Nobody on Mastodon will be automatically notified about my comment. Mastodon does not fetch replies to posts as unread content. And when I comment on a post, this comment is not delivered to all my followers. This is how Hubzilla works, in stark contrast to Mastodon.
This also means that the alt-texts are exclusively geared towards Hubzilla and (streams). They are too long for Mastodon, Misskey and their respective forks. Mastodon will truncate the first alt-text in the middle of a word ("...a dark and slightly bluish grey nec") because that's the 1,500-character mark.
Of course, I could take into account the millions of Mastodon users who might find the post or one of the comments somewhere, then open the whole thread, then click on the start post, then actually go through the whole thread, provided all of it was forwarded to their Mastodon instance, and then discover my comment with two screenshots.
Have you ever encountered a single one of my image posts? Especially in the past six months or so?
Because if I took all these people into consideration as my target audience, I would have to make a similar effort as for my usual pictures, which would result in even more characters of description and four separate image descriptions, plus one preamble for two of them.
I would have to describe each image twice with a long, detailed, explanatory description in the post itself and with a shorter, probably purely visual description with no explanations and no text transcripts in the alt-text.
After all, if I included any possible Mastodon user into my target audience, I would have to take into account what they know and what they don't. And they know a whole lot less about these images than the five gentlemen who are my actual target audience right now. One out of four Mastodon users has never heard of Hubzilla, and my estimation is that every other Mastodon user thinks the Fediverse is only Mastodon.
I would start with the descriptions in the post. And I would start these with a common preamble for both descriptions.
I would explain what the site from which I'm posting screenshots is about.
I would explain that, no, the Fediverse is not only Mastodon because I can't assume it to be known.
I would explain what Hubzilla is because I can't assume it to be known.
I would explain what Friendica is because I can't assume it to be known, and because knowing what Friendica is is required in order to understand the explanation of Hubzilla.
I would explain what a software fork is because I can't assume it to be known, and because knowing what a software fork is is required in order to understand both the explanations of Friendica and Hubzilla.
I would explain what the streams repository is because I can't assume it to be known.
I would explain what a software repository is because I can't assume it to be known, and because knowing what a software repository is is required in order to understand the explanation of the streams repository.
In fact, I would probably have to explain what source code is and what certain software licenses are because I can't assume either to be known, but it's necessary to understand software repositories and the streams repository.
I would explain what single sign-on is because I can't assume it to be known.
I would explain what OpenWebAuth is because I can't assume it to be known.
I would explain what Zap was because I can't assume it to be known, and because that's a requirement in order to understand the history of OpenWebAuth.
I would mention early on that both images contain two almost identical variants of my profile picture.
In order for people to understand my profile picture, I would explain what OpenSimulator is because I can't assume it to be known.
In order for people to understand OpenSimulator, I would explain what Second Life is because I can't assume it to be known.
In order for people to further understand the description of my profile picture, which isn't necessary in reality because I don't mention where exactly I have taken it, but which would be necessary then because someone might want to know where exactly the image is from, I would have to explain what the Wolf Territories Grid is because I can't assume it to be known.
In order for people to understand the explanation of the Wolf Territories Grid, I would have to explain what grids are and why Second Life's world and the worlds based on OpenSim are referred to as "grids" because I can't assume it to be known.
In order for people to understand that, I would have to explain regions and sims in the Second Life and OpenSim context because I can't assume it to be known.
The preamble alone might be longer than my longest image description so far, and my longest image description so far is over 60,000 characters long.
Then I would describe the first image in the post.
Then I would describe the second image in the post, assuming the description of the first image to already be known.
Then I would write a separate, shorter, purely visual image description for the alt-text of the first image. I would not assume any of the other image descriptions to be known and write it as a stand-alone description. The alt-text would also mention that there is a longer, more detailed, explanatory image description with text transcripts in the post, that the users of Mastodon, Misskey etc. etc. can find it hidden behind the content warning which reads, "
<Insert full quote of the summary with all content warnings here>", and that the users of Pleroma, Akkoma, Friendica, Hubzilla and (streams) can find it in the post they are reading right now, right after the second image.
Then I would write a separate, shorter, purely visual image description for the alt-text of the second image. I would not assume any of the other image descriptions to be known and write it as a stand-alone description. The alt-text would also mention that there is a longer, more detailed, explanatory image description with text transcripts in the post, that the users of Mastodon, Misskey etc. etc. can find it hidden behind the content warning which reads, "
<Insert full quote of the summary with all content warnings here>", and that the users of Pleroma, Akkoma, Friendica, Hubzilla and (streams) can find it in the post they are reading right now, right after this image.
I wouldn't be done before Monday.
And I still wouldn't be able to satisfy everyone. Some would demand my alt-texts to be no longer than 200 characters each while still being sufficiently descriptive. Others would dislike the total lack of text transcripts in the alt-text, regardless of the fact that the text transcripts plus the necessary image description elements would have inflated the alt-texts to lengths that Mastodon could not possibly hold. And then there would be those who are enraged because I have the audacity to put over 500 characters into one post, something that should be forbidden Fediverse-wide.
The only choice I have, if I don't want to throw anyone under the bus, is not to post these images at all.
#Long #LongPost #CWLong #CWLongPost #FediMeta #FediverseMeta #CWFediMeta #CWFediverseMeta #Mastodon #Hubzilla #Streams #(streams) #AltText #AltTextMeta #CWAltTextMeta #ImageDescription #ImageDescriptions #ImageDescriptionMeta #CWImageDescriptionMeta #Inclusion #Inclusivity #A11y #Accessibility #AccessibilityMatters
I'm keeping my word. I describe every single last image I publish anywhere. Even when I'm quite certain that the image in question will only be encountered by a small handful of users on Hubzilla and (streams), none of whom needs image descriptions. Just like this time: I'm describing two images in a comment on a Hubzilla post which will not be automatically forwarded to anyone on Mastodon. Mastodon users will only come across that comment if they check the whole thread.
The exception I'm making this time is that I'll only describe the image once. In the alt-text. With no explanations. The target audience doesn't need to have explained to them what Hubzilla is, what (streams) is, what OpenWebAuth is, or what single sign-on is, and they probably don't care enough about what OpenSim is to expect an explanation either. In fact, neither Hubzilla nor (streams) has alt-text and image descriptions as part of its culture.
So there will only be the image descriptions in the alt-texts and not a set of full, detailed image descriptions in the post plus one image description shortened to under 1,500 characters in the alt-text of each image.
Still, although the image is a screenshot and far from being the most complex screenshot I've ever described, the image description will be several thousand characters long. That's halfway okay because Hubzilla and (streams) have no character limits for alt-text either, apart from how much the Web interface can show at once on any given output device. But even if the Web interface can't show an alt-text to its full extent, a screen reader should still be able to read it all. And sighted Hubzilla and (streams) users can take a look at the source code of my comment.
Part of the length comes from describing my profile picture in each one of the screenshots, but there are also lots of pieces of text to describe and transcribe. To keep it "brief" this time, I won't mention and describe the typefaces, though.
#Long #LongPost #CWLong #CWLongPost #FediMeta #FediverseMeta #CWFediMeta #CWFediverseMeta #Hubzilla #Streams #(streams) #AltText #AltTextMeta #CWAltTextMeta #ImageDescription #ImageDescriptions #ImageDescriptionMeta #CWImageDescriptionMeta #CharacterLimit #CharacterLimits
NorVegan: Just to make sure I understand correctly, embedded inline images are not removed altogether, but rather moved to the bottom of the post ...but only 4 images, the rest is actually removed. Did I get that right?
What actually happens is that Mastodon's "code sanitiser" strips out everything that Eugen Rochko doesn't deem old-school, original-gangsta, Twitter-like microblogging. Except a little text formatting since Mastodon 4. And this includes inline images, which Mastodon removes.
In order for images in posts to make it to Mastodon in some way, Friendica, Hubzilla and (streams) make copies of inline images as file attachments because Mastodon does not strip file attachments away. File attachments are how Mastodon does images.
But even then, Mastodon only supports a maximum of four file attachments, even in incoming posts. Everything that goes beyond that is discarded entirely.
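For illustration, here is a minimal Python sketch of that conversion as I understand it. The function names and data shapes are made up for this post; this is not actual Friendica, Hubzilla, (streams) or Mastodon code.

import re

MASTODON_MAX_ATTACHMENTS = 4  # Mastodon's hard limit, even on incoming posts

def inline_images_as_attachments(post_html: str) -> list[dict]:
    """Duplicate inline <img> tags as attachment stubs for Mastodon.

    Mastodon's sanitiser strips the inline images themselves, so the
    sending side copies them into file attachments, which Mastodon keeps.
    """
    pattern = r'<img[^>]*src="([^"]+)"[^>]*>'
    return [{"type": "Image", "url": src} for src in re.findall(pattern, post_html)]

def as_received_by_mastodon(attachments: list[dict]) -> list[dict]:
    """Mastodon discards every attachment past the fourth."""
    return attachments[:MASTODON_MAX_ATTACHMENTS]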
I've set my channel up in such a way that I have to confirm all new connections.
How do you do this?
It's a default setting in Hubzilla. It can be found in the privacy settings. Or rather, the opposite can, namely that anyone can follow your channel just like that, without your explicit consent.
So if you manually confirm new connections, it'll appear to them like you follow them back, but if they're confirmed automatically, it will not appear that way?
It doesn't just appear so.
If someone tries to follow me, all that happens is that I receive a notification. But they don't actually follow me. They aren't permitted anything, not even to receive my posts.
In order for them to actually follow me, I have to confirm their follow request. But in doing so, I automatically follow them back.
Hubzilla doesn't do that Twitter/Mastodon shtick where followers and followed are two different things, and if you follow one of your followers, congrats, you're mutuals.
Normally, Hubzilla works like Facebook. You're connected bidirectionally, mutually, or you aren't connected at all. Only that Facebook calls it "friend", and Hubzilla calls it "contact" or "connection".
Okay, but there's still that switch that I've mentioned above. It allows others to follow you without your consent. I've never tested this switch. None of my Hubzilla channels is experimental enough for me to try it. But thinking about it, I guess it should establish a bidirectional connection, too. Again, I don't know for sure.
This switch does not make much sense for a personal channel. It's much more useful for a public, anyone-can-join forum channel.
Now comes the extra trick that I've pulled, the thing that I mean with "I don't actually follow you back".
Most of my connections are Mastodon users who have absolutely nothing interesting to say. Not within the scope of my channel (primarily OpenSim, secondarily the Fediverse beyond Mastodon).
Some followed me because they took me for a Fediverse guru and nothing else because they had judged me by that one single post or comment they discovered from me. Others, especially from 2023 on, followed me because they found me on the federated timeline on mastodon.social, and they followed everyone whom they found there so they had their timeline abuzz with uninteresting cruft just like on Twitter.
If I actually followed them back the way you'd expect, they'd completely and utterly soak my stream with what I consider totally uninteresting cruft, completely drowning out the interesting posts, even more so if I let their boosts through as well.
On the one hand, I want to let them follow me. On the other hand, I don't want them to drown me in all that off-topic rubbish.
I tried filtering when the channel was new. All of a sudden, my stream was nicely quiet. The problem, however: Hubzilla can't apply a per-connection whitelist keyword filter to posts only; it applies to comments and DMs as well. At least once, I didn't receive a DM because the filter blocked it: it didn't contain any of the keywords that were required for any content to come through.
So what I did was:
1. I edited all my contact roles and explicitly allowed sending me posts in them.
2. I made two new contact roles that don't allow sending me posts.
3. I assigned these new contact roles to all contacts that I didn't expect to post anything interesting.
4. Lastly, I edited my channel role and turned off the permission to send me posts.
Step 1 ensured that everyone who had one of my old contact roles assigned was still allowed to send me posts even after step 4. Steps 2 and 3 made sure all those whom I wanted silenced were silenced when step 4 happened, and those were the majority of my connections.
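To make the interplay of these four steps concrete, here is a loose Python model of the resulting permission logic. The names and data structures are invented for illustration and have nothing to do with Hubzilla's actual internals.

CHANNEL_DEFAULT_SEND_POSTS = False  # step 4: the channel role denies posts

CONTACT_ROLES = {
    "old_role_with_posts": True,   # step 1: explicitly allowed in the old roles
    "no_posts":            False,  # step 2: first new deny role
    "no_posts_no_boosts":  False,  # step 2: second new deny role
}

def may_send_posts(contact_role: str | None) -> bool:
    """A per-contact role overrides the channel-wide default."""
    if contact_role in CONTACT_ROLES:
        return CONTACT_ROLES[contact_role]
    return CHANNEL_DEFAULT_SEND_POSTS

# Contacts keeping an old, edited role still get through (step 1), while
# everyone assigned one of the new roles is silenced (steps 2 and 3):
assert may_send_posts("old_role_with_posts") is True
assert may_send_posts("no_posts") is False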
Again, my stream was not nearly as busy as it would have been if everyone was permitted to send me their posts.
To reduce the cruft even further, I applied boost filters to those whose posts I allow, but who boost a lot of rubbish. And I also try to filter out specific uninteresting things from very busy connections which, at the same time, are unlikely to send me DMs by blacklisting certain keywords.
#Long #LongPost #CWLong #CWLongPost #FediMeta #FediverseMeta #CWFediMeta #CWFediverseMeta #Hubzilla #Filters #Permissions
This is close to what I already do.
In fact, when I have an idea for a picture, I go through the image description in my head, then I look at the scenery, then I go through the image description in my head again with the scenery before my eyes. And when I hit obstacles while trying to describe the scenery in my head, I decide against both creating the image and writing the image post in the first place.
This is why I usually try my best to avoid having e.g. buildings in my images, especially, but not only, realistic buildings. Having to describe them is actually worse than having to describe space nebulae, and I have described space nebulae before. I avoid space pictures now as well.
For me, it isn't about having some alt-text for an image because any alt-text is allegedly better than none. It's about having a pair of descriptions for each image that are as informative, accurate and thus useful as I can possibly write them.
#Long #LongPost #AltText #AltTextMeta #CWAltTextMeta #CWLong #CWLongPost #ImageDescription #ImageDescriptions #ImageDescriptionMeta #CWImageDescriptionMeta
On the ferry back to civilization it became rainy and windy, so it became time to get the boat/bus/train/train (change at Babylon) home to Manhattan.
Okay, that means (streams) has had more problems since the introduction of nomadic identity via ActivityPub last month than were previously known. I have reported it and hope that somebody reacts to it.
The thing is, I have neither received a notification, nor are you listed as incomplete among my connections.
Hopefully Mike isn't on vacation right now. And hopefully my admin sees this, too.
#Long #LongPost #CWLong #CWLongPost #LangerPost #CWLangerPost #FediMeta #FediverseMeta #CWFediMeta #CWFediverseMeta #Streams #(streams)
tl;dr: Why you should block me now
First things first, reasons why you should block me rather than following me:
- In general, I don't behave like I'm on Mastodon. That's because I'm not on Mastodon. I'm on something that's very much not Mastodon, and that's older than Mastodon.
- I regularly post over 500 characters at once. And I usually refuse to cut long posts into threads. I don't have to. My character limit is practically infinite. Also, people who hate threads are closer to me than people who demand threads.
- I use text formatting.
- I do other things that are impossible on Mastodon, such as embedded links.
- I usually use a lot of hashtags. I do so to trigger other people's filters. Filter-generated CWs are the norm where I am.
- I summarise my posts in your CW field. That's because your CW field has always been my summary field, and that's since long before it was your CW field.
- I can't add Mastodon-style CWs to replies.
- My mentions use long names rather than the short names used by Mastodon. This may be disturbing. But I can't switch it off. It's hard-coded.
- I post a lot of Metaverse-related things. And I don't denounce "the Metaverse" as being dead or stillborn. I'm not a crypto shill; the kind of metaverse I write about has been in operation since years before the concept of blockchains was invented.
- My non-Metaverse posts are about the Fediverse. They don't make Mastodon look good. And they tend to be technical.
- My image descriptions in alt-text are always fairly long. And they're my short image descriptions. My long image descriptions are in the post. And they're usually well over 10,000 characters long.
What this is, and what this is about
This is not someone's single, general, all-purpose, personal Mastodon account. This channel (not account) was created to specialise in the topics of virtual worlds in general and, more specifically, those based on OpenSimulator. You can consider me not much more than an OpenSim avatar.
Since most of you have probably never heard of OpenSim, and you're wondering what I'm talking about:
In a nutshell, OpenSimulator is a free, open-source and decentralised server-side re-implementation of #SecondLife, created around Second Life's own viewer API after Linden Lab made the official Second Life viewer open-source. It was launched in January 2007, and most OpenSim-based worlds, usually called grids, have been federated with one another since the introduction of the Hypergrid in 2008. One could say that it is to Second Life what Mastodon is to Twitter, what Pixelfed is to Instagram, and what Lemmy is to Reddit, only that the UI can be almost identical.
I've been in OpenSim since April 30th, 2020. By the way, I'm not in Second Life, and I've never been there.
I occasionally post about the Fediverse with which I mean the Fediverse beyond Mastodon. That's when I have to say something that nobody else says.
Sometimes it's easier to express something in one image macro than in 5,000 words. New meme posts shall go elsewhere in the Fediverse; see below.
I don't post about real life. I may occasionally comment on posts about real life, but I don't post about it. This channel is not about real life.
Where I am in the Fediverse
Those of you who come across my channel in their Web browsers in search of my profile (which is , by the way), will most likely see it right away. But those who see this post in their Mastodon timelines won't, although the text formatting should be a dead give-away. So it's only fair to mention it here:
I'm not on Mastodon. Yes, I'm someplace that's connected to Mastodon, but I'm not on Mastodon proper. So some of you might learn it from this post: The Fediverse is not only Mastodon.
Instead, I'm using a project named Hubzilla. It has tons of features that Mastodon doesn't have, including some that are highly requested on Mastodon such as full-text search, quotes, quote-posts, text formatting, magic single sign-on and nomadic identity. It practically doesn't have any character limits at all.
Also, Hubzilla is older than Mastodon. It had its 1.0 release in December 2015, more than half a year before Mastodon, and it was renamed from a project named the Red Matrix that was launched as early as 2012, about four years before Mastodon. For as long as Mastodon has existed, it has continuously been connected to Hubzilla. Oh, and by the way: Mastodon was the second Fediverse project to adopt ActivityPub in September 2017. The first one, two months earlier, was Hubzilla.
Other channels
My little in-world sister Juno has her own Hubzilla channel. It specialises even more in OpenSim, from her point of view.
Juno Rowland
In addition, I have another channel on an instance of a nameless Fediverse server application that can be found in the streams repository, a descendant of Hubzilla by Hubzilla's creator. I have launched that channel to be able to post images that may be sensitive in some way, e.g. that show faces, which means eye contact. Hubzilla can't make Mastodon blank them out; (streams) can. Again, this channel is in the Fediverse, and you can follow it from Mastodon and anywhere else in the Fediverse.
Jupiter Rowland's (streams) outlet
On the same instance, I have a channel that specialises in posting self-made memes about the Fediverse, based on established and more or less well-known meme templates. This should be clear, but I'd like to mention it anyway: These memes don't suppose that the Fediverse is only Mastodon, nor do they treat Mastodon as the centre of the Fediverse.
fedimemesonstreams (streams.elsmussols.net)
Lastly, I have a blog about OpenSim in German on WriteFreely that's somewhat dormant currently, but I still have a lot to write and post about. WriteFreely is basically Medium in the Fediverse. Again, if you understand German, you can follow the blog from anywhere in the Fediverse. But you can't reply to my blog posts; WriteFreely doesn't support comments.
What it means that I'm on Hubzilla
Next to my hashtags and mentions looking weird in comparison to what you're used to on Mastodon, the biggest "side-effect" of this is that my posts can grow truly massive by Mastodon standards. Where Mastodon has a hard-coded limit of 500 characters, Hubzilla does not have any real character limit at all. It has never had one, and its predecessor, Friendica, has never had one either. Thus, character limits of any kind are not part of Hubzilla's culture, which is very, very different from Mastodon's culture in many ways.
This means I don't do threads when I have to write a lot. I don't have to. I can put everything I want to post into one single post. Long posts are fortunately still something that Mastodon displays correctly even if you can't write them on most Mastodon instances. As far as I know, it's only above 100,000 characters that Mastodon rejects posts completely. And on Hubzilla, you can even post many times more characters than that.
This post, for example, is longer than 42 Mastodon toots, and as you can see, I didn't break it down into a thread of well over 50 single posts.
That is, if I really have to write something akin to a blog post with more than four embedded pictures, then even though I could do that as a regular post, I'll do it as a long-form article that doesn't federate and link to it instead. I know that some of you mobile app users don't like your Web browser popping open, but trust me when I say it's the best solution, also due to what Mastodon does with embedded images which it can't display as such. Besides, I don't force you to tap that link to my newest article.
It's highly unlikely that I'll post anything with that many images, though, because describing each image would be extremely tedious, and the image descriptions would take up horrendous amounts of room in the post. I'll come back to that again further down.
How I handle images
Which takes us to images. It's here where I do acknowledge some of Mastodon's limitations, seeing as well over 90% of the recipients of my posts are on Mastodon, what with how many newbies indiscriminately follow everything they come across to get their personal timeline busy, and others following me with the belief that I'm a Fediverse guru first and foremost.
I no longer post more than four pictures at once in anything that federates into other people's or instances' timelines, streams or whatever. That's because Mastodon can't handle more than four file attachments, and Mastodon removes all embedded inline images from posts.
I still embed the pictures someplace in my posts that is not at the bottom. The bottom is for hashtags which I haven't already used in the text. Yes, I make a lot of use of hashtags for everyone's convenience, and I always write them in CamelCase when appropriate and/or necessary. As for the embedded pictures, sometimes I explain in my posts where which picture that you'll find at the bottom of the post should be where in the text, but I don't always do that.
How I handle alt-text and image descriptions
I'm very serious about image descriptions, so I've been describing all my images for over a year.
However, the topic I post pictures about, virtual worlds based on OpenSim, is very, very niche and very, very obscure. Probably only one in every over 200,000 Fediverse users has even heard of the general topic. This is not even close to common knowledge. So I have to assume that the vast majority of my audience needs it explained, needs everything in my images explained extensively to understand it.
The topic of 3-D virtual worlds is a very visual topic, so it might not be interesting for blind or visually-impaired people. On the other hand, however, they may be excited about the discovery that the Metaverse is not dead, and that free, open-source, decentralised 3-D virtual worlds exist right now and have been since as long ago as 2007. Of course, they'll be curious, and they'll want to explore my images like someone would who can see them. To make that possible and satisfy their curiosity, I have to describe my images at extreme detail.
In fact, I often have to do so anyway when a picture doesn't focus on anything specific.
There is also the rule that any text within the borders of an image must be transcribed 100% verbatim. My images may contain a whole lot of bits of text. And this rule does not explicitly include or exclude text that is not fully readable for whatever reason. So I also transcribe text that can't be read in the image to be on the safe side. This means that my image descriptions may contain lots and lots of text transcripts.
My full, detailed, explanatory image descriptions always go into the post text body, right below the images themselves, and not into the alt-text. They do so for two reasons.
One reason is because they contain explanations and other things that may be useful for anyone, not only for blind or visually-impaired people. But not everyone can access alt-text. Some people can't handle a computer mouse steadily enough to make a mouse cursor hover above an image so that the alt-text shows up. Other people can't use pointing devices such as mice, trackballs, touchpads, trackpoints or the like at all. For example, they may use a pen in their mouth or a headpointer strapped to their forehead with which they can press the keys on the keyboard. These people can't access alt-text either.
For those who can't access alt-text, any information exclusively available in alt-text and nowhere else is completely inaccessible and lost. If it's in the post itself, however, they can access it.
The other reason is because my image descriptions are extremely long. If you as a Mastodon user think 800 characters are tremendously long, think again: My record for the description of one single image is over 60,000 characters. In words, over sixty thousand. This is not a typo.
But Mastodon, Misskey and their respective forks such as Glitch, Hometown, Ecko, Firefish, Iceshrimp, Sharkey or Catodon have a hard limit of no more than 1,500 characters for alt-text. Unlike the character limit for posts, they enforce this limit on external content by truncating it and removing all characters beyond the 1,500-character mark. I can post alt-text with 60,000 characters, but Mastodon will chop 58,500 of them off and throw them away. And even Hubzilla's Web interface is limited in how much alt-text it can show at once because it can't scroll through alt-text.
Thus, my long image descriptions always go into the post itself.
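As a sketch, and assuming a hypothetical function name, the truncation of incoming alt-text described above boils down to something like this in Python:

ALT_TEXT_LIMIT = 1500  # hard cap on Mastodon, Misskey and their forks

def truncate_incoming_alt_text(alt: str) -> str:
    """Chop external alt-text at the limit, even in the middle of a word."""
    return alt[:ALT_TEXT_LIMIT]

# A 60,000-character description arrives with 58,500 characters missing:
assert len(truncate_incoming_alt_text("x" * 60000)) == 1500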
Nonetheless, I always write another image description for the alt-text. I have to satisfy those on Mastodon who absolutely demand a useful image description in the alt-text, no matter what. They may not care for there already being an image description in the post. In fact, I always hide these posts behind content warnings (see below), so they don't even see at first glance that there's an image description in the post unless I mention that in the content warning. To keep them from lecturing or sanctioning me for not adding alt-text to an image, I describe all my images twice.
However, due to alt-text being limited in length, I can't make the description in the alt-text as informative as the one in the post. I never explain anything, and I often don't transcribe any text either if it's too much. But the alt-text always mentions the long description in the post, what it contains (explanations, transcripts) and where exactly to find it.
How I handle sensitive content and content warnings
First of all, Hubzilla is vastly different from Mastodon in this regard. Mastodon is a Twitter clone from 2016 that introduced the use of the StatusNet summary field for content warnings in 2017. Hubzilla is from 2015, and it was created by renaming something from 2012 which, in turn, was a fork of a Facebook alternative from 2010. Hubzilla has never been developed against Mastodon, and it has never tried to mimic Mastodon. It was there long before there was Mastodon. And both its creator and its current maintainer don't want it to ape Mastodon.
This means two things.
One, the summary field which Mastodon has repurposed as a content warning field in 2017 is still a summary field on Hubzilla. It doesn't make sense to give a summary for 500 characters or fewer. But it does make sense to be able to give a summary if you're theoretically able to post millions of characters at once.
So Hubzilla doesn't have Mastodon's CW field, at least not labelled "CW". And Hubzilla's culture was fully fledged and developed when Mastodon was launched in 2016, more so when Mastodon introduced the CW field in 2017, and even much more so when Mastodon exploded with Twitter refugees in 2022.
Putting writer-side content warnings into the summary field (which, again, is labelled "CW" on Mastodon, but not on Hubzilla) is not part of Hubzilla's culture.
Still, I do add Mastodon-style content warnings where I deem them appropriate. Apart from the usual suspects, of which I know hundreds, I add them for:
- long posts (absolutely everything over 500 characters)
- Fediverse meta (whenever I post about the Fediverse)
- Fediverse beyond Mastodon meta (when I post about Mastodon as well as the non-Mastodon Fediverse)
- non-Mastodon Fediverse meta (when I only post about the non-Mastodon Fediverse, but not about Mastodon)
- hashtag meta (whenever I post about hashtags)
- alt-text meta (whenever I post about alt-text specifically)
- image description meta (whenever I post about image descriptions in general)
- content warning meta (whenever I post about content warnings)
Two, Hubzilla has its own way of handling content warnings. It is called "NSFW". That's basically a simple word filter which can optionally be activated and which automatically hides posts behind content warning buttons, depending on the keywords in its word list. The word list is customisable, so everyone can get all the content warnings they desire, given sensitive posts have the necessary keywords or hashtags, and nobody has content warnings forced upon them that they don't need.
Hubzilla has had this feature since years before Mastodon introduced its CWs, and Hubzilla has inherited it from Friendica which has had it for even longer.
But in order for these filters to be triggered successfully, a post needs to have the appropriate keyword or keywords in it. This works best with hashtags. This means that I have to double all my Mastodon-style content warnings with matching hashtags. However, in many cases, there is not only exactly one hashtag for the same kind of sensitive content that is universally used by everyone, not even in filters. Thus, there are often multiple hashtags going with the same content warning.
In combination, this leads to masses of hashtags at the bottom of most of my posts as I add hashtags for almost all my content warnings. I know that some Mastodon users have a problem with more than four hashtags in one post, but warning people about sensitive content and triggering their filters to remove or hide said sensitive content is more important than coddling Mastodon users who still have Twitter on their brains.
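For a rough idea of how such a reader-side word filter behaves, here is a Python sketch. It is an illustration of the principle described above, not Hubzilla's actual NSFW app.

def content_warning_for(post_text: str, word_list: set[str]) -> str | None:
    """Hide a post behind a CW button if it matches the reader's word list.

    Each reader customises their own word list, so posts carrying the
    right keywords or hashtags trigger exactly the warnings they want.
    """
    text = post_text.lower()
    matches = sorted(word for word in word_list if word.lower() in text)
    return "Filtered: " + ", ".join(matches) if matches else None

# A reader filtering on one of the hashtags that double my content warnings:
cw = content_warning_for("Very long post... #CWLong #FediMeta", {"#cwlong"})
# cw is now "Filtered: #cwlong"; a post without the keyword returns None.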
As for sensitive images, I have recently stopped posting any kinds of images on this channel of which I'm certain they're sensitive or triggering to someone. It is for these images that I've created (streams) channels (see way above). (streams) can make Mastodon blank sensitive images out. Hubzilla can't do that.
What it means when I follow you back
Most of the time, it means nothing. It means that I let you follow me. It does not necessarily mean that I actually follow you back.
This is due to a technical limitation on Hubzilla. I've set my channel up in such a way that I have to confirm all new connections. However, being a fork of a Facebook alternative, Hubzilla does not treat followers and followed as two separate things. Just like on Facebook, a connection is usually mutual by default. In practice, this means that when I confirm a new follower connection, I automatically "follow them back", i.e. create a mutual connection. This is hard-coded. I can't change it, not unless I let everyone follow me automatically without my consent.
But this does not mean that all your posts actually appear on my stream. If you don't write anything that's within the scope of this channel, I won't allow you to deliver your posts to my stream. Hubzilla has an absolutely staggering set of fine-grained permission controls that makes it possible for me to deny other users the permission to send me their posts.
If you write about OpenSim, I will allow your posts.
If you write about Second Life, I might allow your posts.
If you write about another virtual world that might be interesting for me, I might allow your posts.
If you write about the Fediverse, and you don't reduce the Fediverse to only Mastodon, I might allow your posts.
If you're an in-world acquaintance of mine who doesn't post about OpenSim, I very likely will allow your posts.
If none of this applies, I won't allow your posts. I'll let your comments on other posts through, I'll let your direct messages through, but I won't let your posts clutter my stream.
If I let your posts through, this doesn't necessarily mean I'll also let your boosts through. I can block boosts individually per connection. So unless your boosts are mostly interesting to me, I will block your boosts.
If Hubzilla should ever improve their filters, and I let your posts through, I may still apply a filter that only lets through what I want to read if you post a lot of stuff that I don't find interesting within the scope of this channel.
Finally
If you aren't okay with any of this, feel free to block me now before it's too late. I don't care how many people follow me or can read my posts as long as the right ones can. But I will not change the way I post to make it more like Mastodon, especially not if I can't because something is hard-coded.
Thanks for your patience.
#Long #LongPost #CWLong #CWLongPost #FediMeta #FediverseMeta #CWFediMeta #CWFediverseMeta #NotOnlyMastodon #FediverseIsNotMastodon #MastodonIsNotTheFediverse #AltText #AltTextMeta #CWAltTextMeta #ImageDescription #ImageDescriptions #ImageDescriptionMeta #CWImageDescriptionMeta #CW #CWs #CWMeta #ContentWarning #ContentWarnings #ContentWarningMeta #Hashtag #Hashtags #HashtagMeta #CWHashtagMeta #Hubzilla #Streams #(streams) #OpenSim #OpenSimulator #Metaverse #VirtualWorlds #Introduction
When researchers gave healthy mice antibodies from patients with Long COVID, some of the animals began showing Long COVID symptoms,
specifically heightened pain sensitivity and dizziness.
It is among the first studies to offer enticing evidence for the autoimmunity hypothesis of Long COVID.
The research was led by Akiko Iwasaki, PhD, Sterling Professor of Immunobiology at Yale School of Medicine (YSM).
"We believe this is a big step forward in trying to understand and provide treatment to patients with this subset of Long COVID," Iwasaki said.
Iwasaki zeroed in on autoimmunity in this study for several reasons.
First, Long COVID's persistent nature suggested that a chronic triggering of the immune system might be at play.
Second, women between ages 30 and 50, who are most susceptible to autoimmune diseases, are also at a heightened risk for Long COVID.
Finally, some of Iwasaki's previous research had detected heightened levels of antibodies in people infected with SARS-CoV-2.
From what I've read, colours are best described using basic colours and extra attributes describing brightness and saturation. Even if a colour is explicitly named in the image or otherwise officially named, it's best to use the name and then describe the colour.
Direct quote from my latest image description which I hope is acceptable: "The background behind him is a solid, slightly pale medium blue with a minimal hint of green."
#Long #LongPost #CWLong #CWLongPost #ImageDescription #ImageDescriptions #ImageDescriptionMeta #CWImageDescriptionMeta
As long as neither the Fediverse as a whole nor Mastodon specifically is owned by a NASDAQ-listed Silicon Valley gigacorporation, it seems not worth publishing anything about it.
The novelty from late 2022 has worn off, too, and the typical Heise reader thinks Mastodon has vanished into thin air anyway, so why should any commercial news outlet be bothered to publish anything about it?
Even if it has its own Mastodon instance.
#Long #LongPost #CWLong #CWLongPost #FediMeta #FediverseMeta #CWFediMeta #CWFediverseMeta
I'm not a friend of image-describing AIs.
I regularly see them mess up descriptions of something as simple as a cat photo. When it comes to images that show something more obscure that requires more descriptive effort, more niche knowledge and more explanation, AI fails almost spectacularly.
I've compared AI descriptions with my own ones. No comparison, in fact. Those were the only times I've ever used an AI to have any of my images described.
As long as people have to use external AIs to describe their images, there's still an obstacle that makes the use of AIs to describe images somewhat inconvenient. As soon as the Mastodon Web interface gets a button for a one-click, fire-and-forget AI image description, nobody will write image descriptions themselves anymore. And people will still not check what rubbish the AI produces. As I've said, fire and forget.
As for Fediverse projects that aren't Mastodon, they won't integrate image-describing AIs anyway. Mastodon is the only Fediverse project that has image descriptions as an integral part of its culture.
What I want to see even less is any Fediverse project not only integrating an image-describing AI, but completely removing the possibility for users to describe their images themselves. I mean, I'm probably safe on Hubzilla and (streams), but I'm not the only one who prefers describing their images themselves over having them described by an AI.
#Long #LongPost #CWLong #CWLongPost #FediMeta #FediverseMeta #CWFediMeta #CWFediverseMeta #Fediverse #Mastodon #AltText #AltTextMeta #CWAltTextMeta #ImageDescription #ImageDescriptions #ImageDescriptionMeta #CWImageDescriptionMeta #AI
I'd be very careful with involving the Lindens. For Linden Lab and Second Life, OpenSim is a direct competitor. A competitor that can even become very dangerous because it's actually unfair competition.
The Lindens won't play nicely with OpenSim if it starts growing too much, and they'll support OpenSim even less. They'll rather put rocks in OpenSim's path in some way, either of their own volition, or they'll be urged to do so by the three investors who own Linden Lab now. For these three, OpenSim siphoning a substantial amount of users off Second Life would mean less revenue for Second Life which, in turn, would mean less return on investment for them. So they'll hold the proverbial guns to the Lindens' chests and tell them to stop this, no matter how, or they'll drop Second Life like it's hot.
I'll check out Hubzilla!
Fair warning ahead: It's more than just a handful for a Mastodon newbie. It's basically the polar opposite of Mastodon, it works nothing like Mastodon, it handles nothing like Mastodon, and it's the most feature-rich Fediverse project of all.
#Long #LongPost #CWLong #CWLongPost #FediMeta #FediverseMeta #CWFediMeta #CWFediverseMeta #Hubzilla #SecondLife #OpenSim #OpenSimulator #Metaverse #VirtualWorlds
Hardly. I can't really schedule anything.
Also, if you need someone to join in on your project, I'm generally not available for anything like this at all. I do my stuff, and I do things my way. Also, I'm already on two grids, quite high-ranking on one of them, and maintaining yet another avatar on yet another grid would be too much.
By the way, if we weren't connected, you'd be very, very lucky that I've caught this post of yours.
Mastodon users are always notified by default when someone whom they don't follow mentions them unless the mentioning post doesn't make it to their home instance.
Hubzilla users, by default, aren't. And most actually aren't. I'm a rare exception.
In general, if you want to talk directly to a Hubzilla user, don't just mention them out of the blue in a public post. They won't notice. Always send them a DM and hope they allow these. Same goes for Friendica and (streams) users.
#Long #LongPost #CWLong #CWLongPost #FediMeta #FediverseMeta #CWFediMeta #CWFediverseMeta #NotOnlyMastodon #FediverseIsNotMastodon #MastodonIsNotTheFediverse #Friendica #Hubzilla #Streams #(streams) #OpenSim #OpenSimulator #Metaverse #VirtualWorlds
If only this worked in 100% of all cases...
For me, it usually doesn't. Now, I'm not a professional Web developer or designer, not even an amateur, and I'm not a commercial blogger either. I'm just a Fediverse user who wants to make his image posts as accessible to as many people as possible.
But what my images show is such an extremely obscure niche that I had to find my own way of describing images. It includes describing each image twice. One description is sufficiently detailed for even the most curious audience members, it's hopefully explanatory enough for everyone to understand the image with zero prior knowledge, and it contains all transcripts.
It goes into the post text body because that's the only place where explanations should go, for not everyone can access alt-text. Also, I've got more room in the post: Mastodon chops long alt-text from outside off at the 1,500-character mark, but I think it's only above 100,000 characters when Mastodon rejects overly long posts. And even that is only a tiny fraction of how many characters I could theoretically post at once.
The other description is for the alt-text. It's purely visual, it doesn't explain anything, and depending on how much I have to describe and transcribe, it may even lack transcripts. You can only do so much in 1,500 characters, and I often need more than 1,500 characters only to explain where I've made an image.
The alt-text also tells people where exactly the full, detailed description of the self-same image can be found, and that it contains explanations and transcripts.
So first of all, this means no explanation to "help the understanding" in the alt-text. Not everyone can access it there, 1,500 characters aren't nearly enough for explanations, and if I exceeded them in alt-text, it'd be useless for everyone on Mastodon, Glitch, Hometown, Ecko, Misskey, Firefish, Iceshrimp, Sharkey, Catodon and all the other Mastodon and Misskey forks because they'd get it truncated.
Besides, often enough, my images don't focus on anything specific. They may actually show an entire scenery as such. So no "important things". And the "most critical information" is the extensive explanation necessary to be able to understand anything in the image.
But even if the image focuses on something specific, the nature of the topic may make people curious about the image while they know exactly zilch about it. If they were sighted, they wouldn't concentrate on that one thing in the image that's important. Instead, they would let their eyes wander around the completely unimportant scenery, discovering a whole new, previously completely unknown world.
However, they aren't necessarily sighted. So in order to let them experience my images in the same way as someone sighted, I have to describe the whole image with absolutely everything in it and explain everything in the image that may be unfamiliar because it doesn't exist in real life.
In fact, I don't describe my images as they are posted, at the resolution at which I post them. Having to write that something is impossible to identify due to its size in combination with the image resolution feels like laziness and weaseling out, especially since I know what it is.
So instead, I describe what I see when I look at not the image at the image's resolution, but the real deal at near-infinite resolution. All of a sudden, I can transcribe microscopically tiny text. And a 2-by-2-pixel blot becomes a strawberry cocktail and reason enough to flag the image sensitive and slap an alcohol content warning on the whole post.
#Long #LongPost #CWLong #CWLongPost #FediMeta #FediverseMeta #CWFediMeta #CWFediverseMeta #AltText #AltTextMeta #CWAltTextMeta #ImageDescription #ImageDescriptions #ImageDescriptionMeta #CWImageDescriptionMeta
This would only work Fediverse-wide if the whole Fediverse was only Mastodon. Which it isn't and has never been.
Different Fediverse projects may have different ways of adding images to posts and alt-texts to images. Sometimes, they don't translate to Mastodon, so Mastodon users don't even notice.
The best examples would be Friendica, Hubzilla and (streams). Their preferred way of adding images to posts is the website/blog way: Upload the image to the file space integrated in your account/channel, then embed it in the post, inline, using markup code. However, Mastodon doesn't support inline images and "sanitises" them away, so these three have to convert the inline images into file attachments just for Mastodon.
If it was possible to add alt-text to someone else's pictures, Mastodon would not only have to send the alt-text back to the source, but Friendica/Hubzilla/(streams) would then have to remember where the particular image is embedded in the post (there's no limit for how many pictures you can embed in a Friendica/Hubzilla/(streams) post, by the way) and weave the alt-text into the markup code in the post.
This means that anyone on Mastodon would be granted permission to at least try and alter other people's posts, even if it's only indirectly. Not to mention that this would require modifications to Friendica, Hubzilla and (streams) as well, and how very complicated the whole process would be.
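Just to illustrate that weaving step: a purely hypothetical Python sketch, assuming the [img=URL]alt[/img] convention for inline images with alt-text. Nothing like this exists in any of these projects.

def weave_alt_text(post_bbcode: str, image_url: str, alt: str) -> str:
    """Rewrite one inline [img] tag so that it carries the supplied alt-text."""
    old = "[img]" + image_url + "[/img]"
    new = "[img=" + image_url + "]" + alt + "[/img]"
    return post_bbcode.replace(old, new)

# The sender would have to locate the right tag among arbitrarily many
# inline images and alter its own, already delivered post accordingly:
body = "Look at this: [img]https://example.com/pic.jpg[/img]"
body = weave_alt_text(body, "https://example.com/pic.jpg", "A strawberry cocktail")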
CC:
#Long #LongPost #CWLong #CWLongPost #FediMeta #FediverseMeta #CWFediMeta #CWFediverseMeta #Mastodon #Friendica #Hubzilla #Streams #(streams) #AltText #AltTextMeta #CWAltTextMeta #ImageDescription #ImageDescriptions #ImageDescriptionMeta #CWImageDescriptionMeta
Sen. Bernie Sanders (I-Vt.),
Chairman of the Senate Committee on Health, Education, Labor, and Pensions (HELP),
today introduced legislation to address the long COVID crisis
that is affecting more than 22 million adults and 1 million children across the United States
and millions more around the globe.
The Long COVID Research Moonshot Act of 2024 provides $1 billion in mandatory funding per year for 10 years to the National Institutes of Health (NIH)
to support long COVID research,
the urgent pursuit of treatments,
and the expansion of care for patients across the country. 
Joining Sanders on the legislation are Sens. Tammy Duckworth (D-Ill.), Tim Kaine (D-Va.), Ed Markey (D-Mass.), Tina Smith (D-Minn.), and Peter Welch (D-Vt.).
This legislation is endorsed by more than 45 organizations,
including:
Mount Sinai Health System,
Infectious Diseases Society of America,
National Partnership for Women and Families,
Solve M.E.,
Long COVID Alliance,
Body Politic,
Patient-Led Research Collaborative,
COVID-19 Longhauler Advocacy Project,
and Marked by COVID.