Find the latitude of any place.  

"As I say in the book, Andreessen's manifesto runs almost entirely on vibes, not logic. I think someone may have told him about the Futurist Manifesto at some point, and he just sort of liked the general vibe, which is why he paraphrases a part of it. Maybe he learned something about Marinetti and forgot it. Maybe he didn't care.

I really believe that when you get as rich as some of these guys are, you can just do things that seem like thinking, and no one is really going to correct you or tell you things you don't want to hear. For many of these billionaires, the vibes of fascism, authoritarianism, and colonialism are attractive because they're fundamentally about creating a fantasy of control."

Termism

55 minutes of blazing abstract, ambient stereo audio (Soundtrack)

An orchestrated tunnel of flutes, strings and synthetic brass blown out of this world (Soundtrack)

Echoing strings from another universe (Soundtrack)

Serene voices from inside a damaged robot's shrieking processor (Soundtrack)

IndiGo charts bold Europe push: Amsterdam key to connect US and EU, CEO says; carrier offers global twist on India

IndiGo is stepping up its international game, with CEO Pieter Elbers identifying Amsterdam as a critical hub for its long-haul services.

C b km may mn nht lp v ci kt m lng


Con Chut Mt Lng Mun Thnh Cng Nhng T May Mn Li Khng Ln No P2


Con Chut Mt Lng Mun Thnh Cng Nhng T May Mn Li Khng Ln No P3


Con Chut Mt Lng Mun Thnh Cng Nhng T May Mn Li Khng Ln No P4





Add your location to a Google Map

San Jose Barracuda After Long Road, Cuda Captain Returns to NHL

Campground number three. Beautiful weather both days, a little chilly at night.

The Long Game: In a Rising Salary Cap World, Player Development is More Important Than Ever for the New Jersey Devils and All NHL Teams

Report: Hawks lose long-time starter, last year's key sixth man in free agency

vengng c th mn t Lng mi mi gi tng fan


For myself, I prefer being on team "there's no over-describing or over-explaining if someone somewhere out there may actually need that level of detail", rarely though I post images for this very reason. I'm also on team "someone somewhere out there may attack or sanction you for being ableist if you don't deliver all the information necessary to understand your image post right away".
That said, I only explain everything once per post, because my explanations generally go into the common preamble of my long descriptions, or into the long description of whichever image first shows something that needs explanation, and the long descriptions currently go directly into the post.
Should I ever switch to long descriptions in external documents linked from the post, then each document will describe one image, and each document will contain all explanations, because I can't count on everyone always opening the first document first. Likewise, if I have multiple images that contain the same key elements, I will describe them in each short description in the alt-text, because I can't count on sighted people always looking at the first alt-text first.
(In case you're confused: yes, each of my original images gets two descriptions, a short one in the alt-text and a long one in the post.)

Russia targets Ukraine's cities with cluster munitions, raising deaths

Amid stalled peace talks, Russia is increasingly attacking with cluster munitions, banned for their indiscriminate damage to civilians.

Civilian casualties in Ukraine in 2025 rose 37% compared with the same period last year, mostly due to Russia's use of long-range explosive weapons, including cluster munitions, in densely populated cities.

Long Island Nets to be a big part of young players' development, says Sean Marks.

Friendica has had full support for formatted long-form articles since its inception 15 years ago. The same goes for all its surviving descendants, created by the same developer: Hubzilla from 2015, (streams) from 2021, Forte from 2024. In addition, Hubzilla can be used to post federating long-form articles (which are automatically sent to Fediverse connections and Atom feed subscribers) and optionally also to post non-federating long-form articles (which aren't sent anywhere).
Friendica has also been able to subscribe to both RSS and Atom feeds since its inception. The same goes for Hubzilla.
This is not new to the Fediverse at all.
See also my Mastodon vs Friendica, Hubzilla, (streams) and Forte feature comparison tables here: . (By the way: This is a non-federating Hubzilla article.)
CC:

Jaylen Warren contract projection: Will the Pittsburgh Steelers give the RB a long-term deal?

The Pittsburgh Steelers let Najee Harris go in free agency this past March. The former first-round pick

June swoon leaves Royals with long-shot playoff odds

Jaire Alexander named long-shot candidate to win Comeback Player of the Year

The Baltimore Ravens added cornerback Jaire Alexander to be a mainstay of their defense in their surge for

The long and winding draft rights of Georgios Printezis

OpenSim user for five years here.
Little nitpick: OSgrid is not "the OpenSim grid". It is not even "the official OpenSim grid". There is no such thing as an official OpenSim grid, only 4,000+ big and small grids that are independent from both each other and OpenSim's development.
In fact, OpenSim itself is just five spare-time hobbyists, a wiki, a code repository and a bug tracker. Nothing more. Everything else is third-party.
OSgrid was the first public grid. It's the oldest grid, 18 years next month. And it's probably the only major grid that runs vanilla code so that the OpenSim devs can debug their development code under regular conditions. AFAIK, the other big grids all run forks. In fact, OSgrid isn't even the biggest grid anymore. While it's bigger than Second Life, the Wolf Territories Grid is bigger than both.
If you want to know more: your blog post was the perfect opportunity for me to finally write the article that I had planned for a while now.
Oh, does this feel like my inner monologue when I post a photo. It is a bummer that it, at times, prevents me from posting more photos, but I hope this makes me a little more quality over quantity.

I actually keep entire categories of things out of my images because I can't describe them up to my own standards. This includes realistic buildings. I would first have to look up loads of architectural terms to describe all the details of a building, and then I would have to explain each and every one of these terms in a way that lets absolute laypeople understand my image description without ever having to ask me anything or look anything up themselves.
The last time I posted an image with a building was . I actually went around and looked for a nice motif for a new image post for quite a while. There was one harbour scene which I thought looked spectacular enough to show, but which was impossible to describe. So I fell back on this motif. I thought it was not too bland, not too simple and at the same time not too complex. Besides, the one building in the image is totally unrealistic and without all the tiny details that would make up a realistic building.
And then I ended up taking some 30 hours over two days to describe the image in over 60,000 characters. The building alone took up some 40,000 or so. This is still the longest image description in the whole Fediverse, I think.
My last image post before that was with still over 25,000 characters of description for one image, and I consider it outdated slop.
It was the last time that I described an image in my image with more details than visible in the original of that image itself. And that's where I got sloppy. I completely forgot to transcribe what's written on the license plate above the office door of the motel in that image in my image. And I couldn't be bothered to give detailed descriptions of the two 1957 Chevy Bel Airs parked in front of the motel because I really wanted to get that description done. In the actual image, all of this is sub-pixel-sized. You wouldn't know it's even there if I didn't mention it. I did describe the motel, but it's a fairly simple building, and I decided against describing what's visible through the windows with open blinds from the camera angle in the image in my image.
In the next image, the one with 60,000+ characters of description, I stopped describing images in the image beyond what I can see in the place where the image itself was taken. That was because one image is a destination preview image on a teleporter. The destination is a kind of teleport hub. The preview actually (if only barely so) shows over 300 single-destination teleporters, a few dozen of them with their own preview images.
So I teleported to that hub to describe it in detail. And I looked at the teleporters and their preview images. It turned out that not only do pretty much all of these preview images have text in them, and not exactly little of it, but some of them actually contain images within themselves again.
I would have had to describe that image in my image, dozens of images in that image in my image and a number of images in these images in that image in my image. For each of the latter, I would have had to teleport three times from the place that I originally wanted to describe. I would also have had a whole lot more text to transcribe. All on a sub-pixel scale several times over.
Not only would that have been a humongous task, but more importantly, it would have inflated my image description and my whole post to more than 100,000 characters. Mastodon would probably have rejected my post for being too long. And this would have rendered the whole effort futile. In the few places in the Fediverse that would still have accepted my post, nobody cares for image descriptions.
AI certainly can't get inside my brain well enough to write accurate descriptions. And even if it could... would I let it? Hmmm.

I've only used AI to describe images twice. And in both cases, that was to show just how bad AI is at describing images about an extremely obscure and quickly changing niche topic at the level of accuracy and detail which I deem necessary for that topic.
I guess one problem that you're facing is that next to nobody in the Fediverse can even grasp what you're thinking about, what you're taking into consideration for your image descriptions. That's why you got next to no feedback upon your first comment in this thread.
I have one advantage here: What you're pondering, I have actually done. If I feel like people won't understand what I'm thinking about, I point them at one or several of my actual image posts, and/or I post a quote from one of my actual image descriptions. Still, almost nobody actually goes and reads through any of my image descriptions, but I guess they get the gist, especially when I post snippets from my actual descriptions.
CC:
Just for me to be on the safe side: What are your minimum requirements for alt-texts and image descriptions so that you refrain from sanctioning a user?
Full, to-the-point adherence to , the , etc., even though they contradict each other?
Do you demand image descriptions be detailed and informative enough that nobody will ever have to ask the poster for explanations and/or details, because they're all already in the descriptions, no matter how niche and obscure the content of the image is?
If there is already a lengthy image description in the post itself (imagine all the character limits you know in the Fediverse: it's longer than all of them by magnitudes), do you still demand there be another description in the alt-text, even though the alt-text actually points the user to the description in the post, because there absolutely must be a sufficiently detailed and accurate image description in the alt-text, full stop?
In fact, do you sanction image descriptions in general, or alt-texts in particular, if you think they are too long? For example, if you stumble upon an image post from me that has a "short" image description of 1,400 characters in the alt-text and a "long" image description of over 60,000 characters in the post itself (and I've actually posted such a thing into the Fediverse), will you demand I discard two days and some 30 hours of work, delete the long description and cut the short description down to no more than 200 characters? Maybe even while still retaining the same amount of information? Lest you have me dogpiled and mass-blocked, or worse?
By the way, , and .
A few notes:
First of all, your link is behind a paywall.
Next, I hope you don't honestly think that only two virtual-world systems were ever made, Second Life and "The Metaverse", which many think is the official and trademarked name of Meta Horizon (very early on Facebook Horizon, a.k.a. "Meta Metaverse", "Facebook Metaverse" or "Zuckerberg's Metaverse").
On a sidenote: The community has been using the term "metaverse" for actually existing virtual worlds in its standard jargon since 2007, the year that OpenSim was first released. 14 years longer than Zuckerberg. In particular, , the first public OpenSim grid, the oldest OpenSim grid and until fairly recently the biggest one, has officially labelled itself "The Open Source Metaverse" since 2007. Also, there are things like the which resides in , another OpenSim grid, the which focuses on and is based in OpenSim, too, and entire grids like the . All of them predate both Zuckerberg's 2021 Metaverse announcement and the 2022/2023 Metaverse hype.
While the big Second Life hype was from 2007 to late 2008, it hasn't entirely faded into obscurity. And academics and psychologists aren't that big a part of its user base. Many Second Life users have stuck around since the late 2000s or 2010s, and most are hobbyists.
If you want an insight into Second Life without actually spending lots of time there, check Wagner James "Hamlet" Au's blog . He also covers other virtual worlds from VRChat to Horizon.
As I can't read your blog post, I do hope you've written it from the perspective of someone who actually knows a few things about Second Life, such as that it's constantly evolving, and that it's the one virtual world that's the closest in looks to Cyberpunk 2077, now that physically-based rendering is spreading. And not from the perspective of someone who only very recently discovered that Second Life was, surprisingly, not shut down in late 2008 or early 2009, and who thinks it still looks the same as back then.
You say this like it's a bad thing.

Not at all.
One advantage is, as you've said, that the backend and the Web frontend can have their own developers, development of both can largely be detached, and they can be upgraded separately from one another.
Separate Web frontends can be developed by people who actually know a thing or two about frontend development and UI design. I mean, look at the Web UIs of some all-in-one Fediverse server applications. They're often the digital counterpart of random knobs and switches poked through a piece of cardboard and labelled with a ballpoint pen, just so that these knobs and switches are there. Sometimes they're the equivalent of expecting all kinds of end users to operate DIP switches, but hey, they're still better than soldering and unsoldering wires.
Another advantage is that server software for which alternative frontends exist does not have to drag its default frontend around. There are Mastodon servers with alternative frontends, but they still have to have the two official Web UIs installed (the default one and the Tweetdeck-style one) because they're firmly welded to the backend. I guess we all know what a heavyweight Mastodon is, and I'm certain that part of the weight is caused by the built-in Web UIs. In stark contrast, you can set up an Akkoma server with Mangane instead of Akkoma-FE, as in without having to also install Akkoma-FE.
By the way, Hubzilla is an interesting case here. Not only is its default UI very configurable, but Hubzilla itself is highly themeable, and third-party themes almost amount to entirely new UIs. At the same time, however, practically all official development efforts went only into the backend for most of its existence.
Any Hubzilla UI has to wrestle an immense wealth of features, and not exactly few new features were added over time. This, however, caused Hubzilla's UI to gradually turn into a jumbled mess, because some of the new UI elements were seemingly added in totally random places. Not only was the UI never cleaned up, but the default theme is perpetually stuck in 2012 (the name "Redbasic" says it all: it was made for Hubzilla when Hubzilla was still named Red), it was derived from an early Friendica theme, and even Friendica wasn't pretty back then. Also, the documentation was completely neglected.
So the situation last year was that there was only one working Hubzilla theme left, and that was Redbasic. It was the only theme that was even still being upgraded to work with newer Hubzilla versions. There used to be other official themes, but they eventually ended up so outdated that they were removed altogether. 's third-party themes were last touched seven years ago; that must have been around the time when Hubzilla 3 came out. At the same time, the official documentation was not only highly incomplete, but so outdated that parts of it were simply false. It partly referred to features that had been axed many years ago (tech levels) and features that simply were never there (four different mention styles), and parts of it even still spoke of Red. Thus, nobody even knew how to develop new themes for current Hubzilla.
That was when the community stepped in. sat down and . not only started working on his , but in the same process, he reverse-engineered Hubzilla's theming system to write documentation for theming Hubzilla which had never been written before AFAIK. Around that time, was dabbling with specialised themes for certain purposes, e.g. one very clean theme for Hubzilla channels used as long-form blogs. Later on, joined the fray with his now-popular .
Granted, Hubzilla still carries Redbasic around, not only as the default for new channels unless the admin chooses another one, but also as a fallback in case a new Hubzilla version doesn't support existing third-party themes anymore. The latter is becoming less likely as the Utsukta themes are being built against Hubzilla's development versions now. Besides, it's in Hubzilla's nature that everything on a hub is updated at the same time, including third-party repositories.
In general, the Hubzilla community is no longer that easily satisfied with a UI that "just works", and the devs have taken notice. Hubzilla 10.4, now a release candidate, will spruce up certain core parts of the UI. It will introduce a tree-style thread view as the new default instead of the current chronological view, something that Friendica, (streams) and Forte have had for significantly longer. Mind you, this is actually a side-effect of the introduction of "lazy loading" conversations to reduce the server workload. Also, upon user request, it will add a button for adding images to comments.
If (streams) and Forte grow bigger, the same could happen there. They have two official themes to choose from, fairly new Fresh and an older version of Redbasic. However, they don't have a large enough community for all the same things to happen to them that happened to Hubzilla, although Pepe has said he'd rewrite the (streams) and Forte help as well, seeing as Mike had ripped them out entirely with no replacements as they were too outdated at that point. Maybe someone will even write a guide on how to adapt Hubzilla themes to (streams) and Forte.
That is, (streams) and Forte are both already the result of several years of UI and UX advancement and improvement, and of making them fit for a Mastodon-dominated Fediverse (whereas Hubzilla is still geared towards a Fediverse which it was expected to dominate itself by the mid-to-late 2010s). This is stuff which can't be taken care of in themes, because it concerns the UI engine itself, and it's partly tied deeply into the backend.
While Hubzilla, (streams) and Forte won't be able to do without their official themes anytime soon, the official themes don't significantly weigh them down. Still, they require some maintenance work to keep up with the backend.
Wouldn't Mastodon be better if it specialised in developing apps, and outsourced the server side to people who know how to do back-end engineering?

This makes me wonder which half Mastodon would be willing to outsource. I think they'd rather hold on to the backend and pass all the frontends on. Of course, this would come with the advantage of the official Mastodon mobile app actually becoming somewhat decent rather than remaining the "we need an official app, no matter how" kluge that it is today.
CC:

The J. Arthur Keenes Band: Overcoat Weather

CC BY-NC-ND (Attribution-NonCommercial-NoDerivatives)

On a walk through the forest at dusk, I came across this adorable little bug.
It belongs to the group of long-horned beetles, also known as cerambycids (family Cerambycidae). They are a diverse group of beetles characterized by their long antennae, often longer than their bodies, hence their name.
It's similar for me, only that I always put a gigantic effort into describing my own images twice, once not exactly briefly in the alt-text and once with even more details in the post itself. Sometimes I find an interesting motif, but when I start thinking about how to describe it, I don't even render an image, because it isn't worth doing so if I can't post it.
I haven't posted a new image in almost a year. In fact, I've got a series of fairly simple images for which I started writing the descriptions late last year, and I'm still not done. So much for "it only takes a few seconds".
Before someone suggests I could use Altbot: I'm not even sure if it'll work with Hubzilla posts. And besides, no AI on this planet is fit for the task of properly, appropriately and accurately describing the kind of images that I post.
And then there's me who has managed to describe one image in a bit over ten thousand words last year. Good thing I have a post character limit of over 16.7 million. And I actually limited myself this time: I did not describe images within my image in detail, in stark contrast to about two years ago when I described a barely visible image in an image in well over 4,000 characters of its own, and that wasn't the only image within that image that I described.
CC:
So Pleroma and Akkoma (which, for some reason, is missing from the list) actually use the ActivityPub C2S API to connect their frontends? Even though Pleroma predates ActivityPub and started out as an alternative GNU social frontend, much like Mastodon?
I mean, they're famous for having separate repositories for the server and the Web frontend (same name with "-FE" attached). And they're equally famous for having servers that forgo the official frontend in favour of third-party stuff, most notably Mangane.
So if Mangane actually makes use of that API rather than a homebrew *oma client API, it could be used as, or, if need be, modified into, a sparring partner for API-testing purposes, not to mention that it's living proof that the API actually works. As it integrates with Pleroma and Akkoma that well, I've got my doubts that it only uses the Mastodon client API.
In the case of (streams) and Forte, which are almost the same software save for protocol support, the Web UI is much closer to the server backend, as flexible and modifiable as it is. In their case, the question would be whether they could be used to test just how far feature support in the ActivityPub C2S API can possibly go, maybe even whether it'd be possible to use the ActivityPub C2S API to build an almost fully-featured (streams)/Forte client app (except, of course, for Web UI configuration and (streams)' per-channel ActivityPub switch, which might cut the whole app off from the server).
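For readers who haven't looked at the C2S side of ActivityPub: per the W3C spec, a client submits content by POSTing an activity (or a bare object, which the server wraps in a Create) to the authenticated actor's outbox. Here is a minimal sketch in Python; the actor and outbox URLs are hypothetical placeholders, and the code only builds the request body rather than talking to a real server.

```python
import json

# Hypothetical actor; per ActivityPub, the outbox URL is advertised
# in the actor document's "outbox" property.
ACTOR = "https://example.social/users/alice"
OUTBOX = ACTOR + "/outbox"

def make_create(content_html: str, obj_type: str = "Note") -> dict:
    """Build a C2S Create activity carrying a Note or Article object.
    The server assigns the object's 'id' on submission."""
    return {
        "@context": "https://www.w3.org/ns/activitystreams",
        "type": "Create",
        "actor": ACTOR,
        "to": ["https://www.w3.org/ns/activitystreams#Public"],
        "object": {
            "type": obj_type,          # "Note" (short) or "Article" (long-form)
            "attributedTo": ACTOR,
            "content": content_html,   # AS2 'content' may carry HTML markup
        },
    }

activity = make_create("<p>Hello, <b>Fediverse</b>!</p>")
body = json.dumps(activity)
# A real client would POST `body` to OUTBOX with the media type
# application/ld+json; profile="https://www.w3.org/ns/activitystreams"
```

Whether Mangane actually speaks this API, rather than the Mastodon client API, is exactly the open question raised above; this sketch only shows what spec-conformant C2S traffic would look like.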
CC:
First of all: You must never put line breaks into alt-text. Ever.
Besides, that will certainly not be the day that I'll post my first image after more than a year.
It's tedious enough to properly describe my original images at the necessary level of detail, and one image takes me many hours to describe, sometimes up to two full days, morning to evening. Not joking here. I certainly won't put extra effort into turning even just the 900 characters of "short" description that go into the alt-text into a poem. And I definitely won't turn the additional 20,000, 40,000 or 60,000 characters of long description that go into the post into a poem either. (And yes, I can post 60,000+ characters in one go, and I have done so in the past. My character limit is 16,777,215.)
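As an aside, the oddly specific limit of 16,777,215 characters is not arbitrary: it is 2^24 − 1, the largest count a 3-byte length field can express, which is also the maximum size of a MySQL/MariaDB MEDIUMTEXT column. That the limit comes from such a column in the server's database is my assumption, not something stated in the thread; the arithmetic itself is easy to check:

```python
# 16,777,215 = 2**24 - 1: the largest value of a 3-byte length field,
# and the maximum byte size of a MySQL/MariaDB MEDIUMTEXT column
# (the presumed, unconfirmed origin of the quoted character limit).
LIMIT = 16_777_215

assert LIMIT == 2**24 - 1
assert LIMIT == (1 << 24) - 1
```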
What comes to mind for me personally is first describing things in summary, as a whole, so that people are brought on board. If details are then commented on or asked about, things could unfold further from there.

In your previous comment, you yourself wrote, and I quote verbatim:
Having to ask about content contradicts participation:

To me, that means: people shouldn't have to ask in the first place. They shouldn't have to ask anything at all. All the information they need, no matter how much and of what kind, must be delivered immediately in the image post, together with the image.
Since apparently only the two of us are talking here right now, and I'm not blind either: maybe also raise the issue in your own timeline, explicitly ask blind people for their thoughts on it, and add #fedihelp

I have really tried that often enough; see . And often enough, I also post to two Guppe groups.
Success: minimal. Most posts got absolutely no reactions at all. Sometimes there was a like, or maybe two. Hardly any of them was ever even repeated/boosted. Replies came often enough from people who didn't understand what I actually wanted, for instance from Hubzilla, where the idea of accessibility and inclusion hasn't caught on to this day and alt-texts are partly perceived as "yet another silly Mastodon tic", but for the most part, just like Mastodon culture in general, are completely unknown.
I've had polls with fewer participants than options.
The only time several actually blind people really replied to such a post was . And even then only because I personally mentioned a whole bunch of blind activists in the Fediverse.
The hurdles are manifold.

Ultimately, I would have to bring on board not only blind and visually impaired people, but also the Mastodon HOA and its alt-text police. The latter has long since stopped caring about whether there is an image description at all; instead, it keeps raising the minimum criteria for an acceptable image description. And I want to know whether I have its blessing, because in the end, it decides how much reach I'm allowed to have.
And then it will turn out to be a problem that I'm the only one in the whole discussion who isn't on Mastodon, and thus the only one who can see the entire thread. The others don't see each other's comments, so they don't respond to each other and don't discuss with each other, only with me. They will have opposing ideas, but they won't even suspect it, let alone notice. So it won't be a constructive discussion.
Still, the head-butting was often justified for Mike. Unless, of course, you say that Mastodon is and has always been the one and only Fediverse gold standard and the one and only ActivityPub reference implementation.
I'll give you an example: In July, 2017, Mike's Hubzilla was the very first Fediverse server software to implement ActivityPub. Mike played strictly by the rules. As Hubzilla has a character limit of over 16.7 million and supports text formatting on the same level as the best long-form blogging platforms out there, he declared Hubzilla long-form and made Hubzilla send Article-type objects. Just as the spec demands.
In September, Mastodon became the second Fediverse server software to implement ActivityPub. But Gargron did not play by the rules. He only implemented a tiny subset of the protocol, namely what suited him. And he also broke it: Mastodon could display Article-type objects at their full length. But Gargron staunchly refused to implement any support for anything that goes beyond plain text. The ActivityPub spec explicitly says that Article-type objects are formatted. But Gargron wanted Mastodon to be a purist, minimalist, old-school, original-gangsta, Twitter-cloning microblogging platform. And stuff like bold type, italics, headlines, embedded in-line images or titles aren't purist, minimalist, old-school, original-gangsta, Twitter-cloning microblogging.
And so Mastodon took fully formatted, long-form-blog-style posts from Hubzilla and ripped everything out that wasn't plain text. It basically defaced Hubzilla posts. That is, it had been defacing Friendica and Hubzilla posts all the same ever since it was launched. But this time, there was a spec that actually defined what Mastodon was doing as wrong. And that spec had been finalised and pronounced a W3C standard meanwhile.
So Mike asked Gargron to please follow the official ActivityPub spec and make Mastodon support full HTML rendering for Article-type objects.
Gargron refused. Old-skool microblogging is plain text and only plain text, full stop.
This went back and forth. Eventually, Gargron presented a "solution": Mastodon now "renders" Article-type objects by showing the title and, right below, a link to the original. That is, basically not at all anymore. Of course, this meant that the vast majority of Mastodon users no longer read what came from Friendica and Hubzilla because they couldn't be bothered to open that link.
Mike saw this as a direct assault against Friendica and Hubzilla and an attempt at excluding both from "the Fediverse" which was almost entirely Mastodon at that point already. So he himself had to break the spec and make Hubzilla send Note-type objects instead so that Mastodon renders them at all. It still defaces them to this day.
(Friendica's solution was to send an Article-type object when a post has a title and a Note-type object when it doesn't have a title. Optionally, it can always send Note-type objects.)
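The whole Article/Note dispute comes down to a single `type` field on the Activity Streams 2.0 object: per the vocabulary, a Note is a short written work and an Article is a multi-paragraph, formatted one. Below is a minimal sketch (all values hypothetical) of the two object shapes, plus a toy model of the title-plus-link fallback described above; this illustrates the behaviour discussed, it is not Mastodon's actual code.

```python
# Two hypothetical AS2 objects, differing only in "type" (and a title).
article = {
    "@context": "https://www.w3.org/ns/activitystreams",
    "type": "Article",
    "name": "On Fediverse text formatting",  # AS2 "name" carries the title
    "content": "<h2>Intro</h2><p>Full <em>HTML</em> body with formatting.</p>",
}
note = {
    "@context": "https://www.w3.org/ns/activitystreams",
    "type": "Note",
    "content": "<p>A short post, <b>lightly</b> formatted.</p>",
}

def fallback_render(obj: dict, link: str) -> str:
    """Toy model of the fallback described above: Article-type objects
    are reduced to their title plus a link to the original, while
    Note-type objects are rendered inline."""
    if obj["type"] == "Article":
        return f'{obj.get("name", "(untitled)")}: {link}'
    return obj["content"]
```

Sending a Note instead of an Article, as Hubzilla ended up doing, simply routes the post down the second branch so that its content is shown at all.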
By the way: This very same head-butting has returned. Not between Gargron and Mike, though, but between Gargron and much bigger players. Platforms like Flipboard and Ghost have introduced ActivityPub, and they send Article-type objects just as the ActivityPub spec demands. The same goes for WordPress. And, of course, they don't send plain-text "long tweets". They send fully formatted news articles and blog posts.
And now they demand that Mastodon, as the biggest player in the Fediverse by user count, make their Article-type objects look just like they do at the source. They demand that Mastodon not only render bold type, italics, headlines and the rest of the subset of text formatting that was introduced with Mastodon 4 in October, 2022. They also demand that Mastodon show the titles and, most importantly, leave the images embedded within the articles in place, no matter how many there are.
This is no longer Gargron and his devs vs a guy in the Australian outback. This is Gargron and his devs who try hard to bend the Fediverse to their will and assume supreme control over it vs the Ghost Foundation, Flipboard, Inc. and Automattic, Inc. that play strictly by the ActivityPub rules. And I dare say that Automattic, Inc. alone has more money and more market power than Mastodon gGmbH and Mastodon, Inc. combined.
Mastodon has always gotten away with ignoring and breaking standards, re-inventing wheels and implying towards its religious followers that the whole Fediverse was built upon Mastodon and around Mastodon, and that everything that does things differently from Mastodon is inherently a broken add-on to Mastodon or an evil intruder. This time, they won't. And I guess they've actually taken it into consideration.

Transcribing everything is not possible, and probably nonsense anyway.

It is entirely possible, and I do it constantly. Nobody believes me, and nobody can imagine how I do it, but I do it.
Sighted people filter, too; they take in the essential details.

I can't rely on that with my pictures.
As I said: for the vast majority of people out there, this is a whole new, unknown universe. On top of that, there are two realisations. For one thing, "the metaverse" isn't dead at all if I can show pictures from virtual 3-D worlds. For another (and this only concerns sighted people): what I show there looks a thousand times better than anything Zuckerberg ever teased. Whoever sees it may never have expected "the metaverse" to be able to look like this.
I always have to expect that, for exactly this reason, someone out there can hardly contain their excitement and curiosity. Will they then only look at what matters in the context of the post? Not at all! They will inspect all the big and small details. Relevant or not, it's all exciting.
And blind and visually impaired people must have the very same chance to do exactly that, namely to take in all the big and small details, whether they are relevant or not. That's how accessibility is defined; that's how inclusion is defined.
Having to ask about content contradicts participation: it is normal to scroll through the timeline to keep up with what's going on without having to join discussions right away.

And that's exactly why I serve up all the explanations and descriptions that I believe anyone out there might want, right in the image post, on a silver platter. No matter how far back I have to go.
Granted, this sometimes means that I alone need more than 1,000 characters to explain where a picture was taken, because I may even have to explain explanations, and explanations of explanations.
Since the end of last year (!), I have been working on and off on the descriptions for a series of avatar portraits with a neutral white background. So I really only have the avatar to describe, for it is always the same one in the same pose. But before I can describe the avatar, I need explanations so that the visual descriptions can be understood in the first place. That means almost 2,000 characters of explanation of what kind of virtual-world system this actually is, another 600 or so characters to explain the location of the shot (where at least the white background is mentioned), and then over 9,000 characters to explain how avatars in these worlds are built and how they work.
That's precisely the problem I have because I don't post real-life photos but pictures from virtual worlds: nobody knows anything about them. Out of 200,000 Fediverse users, maybe one even knows that the virtual-world system I use exists at all. Otherwise, I have to explain everything down to the basics.
And blind or visually impaired people don't know what anything in these worlds looks like. How would they? So whenever I mention what is in a picture, I always have to assume that someone blind or visually impaired will ask themselves, or me: "Okay, but what does that look like? Why don't you describe right away what it looks like?"
After all, there are also, time and again, things in my pictures that don't exist in the real world. And I have to explain them for blind, visually impaired and sighted people, because they don't know what these things are, and describe them visually for blind and visually impaired people, because they don't know what such things look like.
For instance, I could write (I normally post in English):
To the right of the avatar, there is an OpenSimWorld beacon.

Do you know what an OpenSimWorld beacon is, what it does, what it's there for? Do you know what an OpenSimWorld beacon looks like?
You probably don't know any of that. Almost nobody out there does.
So I'd rather write this (and this is an excerpt from an actual long description in ):
In front and partly to the right of the conifer, there is an object which doesn't exist in real life, but which is typical for OpenSimulator: an official OpenSimWorld beacon of the latest generation, but modified to fit the style of the sim.
This particular device has a shiny black foot with a long rectangular footprint which is about 80 percent as high as it is deep and tapered upward, and which has rounded edges. It carries the less shiny main body of the device. It starts narrower than the top surface of the foot in all directions. From bottom to top, it first protrudes forward and immediately increases in depth, curves slightly backward and continues in a straight slope which still goes more upward than backward. Eventually, it curves upward and ends in a slim, rounded top. Transversally, it keeps the same width all the way. Both sides are carved out and illuminated, normally in cyan, here in almost white. Otherwise, it comes in its standard dark grey. However, it's actually a brownish anthracite grey, and the very top shows some light blue, so while it clearly hasn't received the monochrome treatment all over, a closer look also reveals that it should have. The same goes for the foot, which is slightly bluish.
The straight section of the main body carries a shiny black frame with the central element of each OpenSimWorld beacon: the touch display with a ratio of 4:3. When not in use, this specimen shows the standard idle screen, only that it was modified to monochrome. Slightly above the middle, there is the official OpenSimWorld logo, namely the word "OpenSimWorld" itself with no actual caps. However, the "O" at the beginning is replaced with a circle matching the rounded sans-serif typeface which contains a stylised globe tilted to the left by an angle similar to Earth's inclination and showing three parallels and two meridians, but no land underneath. The last five letters, "world", are darker than the rest. Below it, in the same typeface, but in an even lighter grey, and without caps again, but a bit smaller, "teleporter" is written. Both lines also have shaded outlines that make them appear imprinted.
Further below, "Click for destinations" is written, still in the same typeface and in about the same shade of grey as "OpenSim" above, but small enough to appear shorter than "teleporter" above. The background of the screen is a very light grey on the top 35 percent, a medium grey on the bottom 35 percent and a gradient between the two. Clicking the screen breaks the monochrome theme, though, because the user interface which then appears has not been modified.
Lastly, there's a light grey panel on the front side of the foot which is scripted, too. It has "Like or comment this region" written on it in two lines in the same typeface as the writing on the touch screen, but with medium grey outlines. On the left, there is a medium grey thumb-up symbol, and on the right, there is a speech bubble with three dots in it in two shades of medium grey.
An OpenSimWorld beacon serves several purposes. For one, it transmits information about the sim to . This information includes not only the name of the sim and whether it's currently online, but also how many avatars are currently on the sim. The identities of these avatars are not transmitted, only how many they are. This makes finding sims with activity on them easier for users who want to go to parties or otherwise get into contacts with others, for OpenSim's general population density is much, much lower than Second Life's. This feature also helps generate rather controversial statistics about how popular any given sim is.
OpenSimWorld itself can be seen as the third-party centre of the decentralised Hypergrid. It started out about a decade ago as a sim catalogue, making navigating the Hypergrid and finding places much easier and more convenient than previous solutions like teleport stations or simply exchanging landmarks. Sims must be listed manually by registered users, and they need one OpenSimWorld beacon in-world. For example, is the entry for Black White Castle.
In addition, OpenSimWorld offers discussion forums, user-created information and discussion groups for various topics, announcements of in-world events, information about free or paid land rentals other than whole sim rentals by grids, a catalogue for in-world scripts etc.
The other purpose of an OpenSimWorld beacon is as a teleporter which gives you access to currently about 1,700 sims all over the Hypergrid by means of a crowd-sourced sim list, namely that on OpenSimWorld itself. If you click the touch screen, it shows a list of the ten sims known to OpenSimWorld with the most avatars on them. Each sim is listed with its activity ranking, its name, the letter "A" in square brackets if it is Adult-rated and the number of avatars on it. The list can be navigated page by page, ten sims per page. However, while it gets the information it shows directly from OpenSimWorld, it doesn't show any further information, not about the sim and not about whatever event may be ongoing on any given sim. Clicking on a listed sim will immediately teleport you there, but it won't tell you what the place is where the beacon is taking you.
After a while of inactivity, the touch screen switches back into its idle mode.
Clicking the panel on the foot leaves a like on the entry of the sim.
The shadow of the tallest mountain pine on the left-hand side of the pathway is cast on the OpenSimWorld beacon.

And even that wouldn't work had I not explained at the beginning of the long description:
OpenSimulator is a free, open-source, cross-platform server-side re-implementation of the technology of . The latter is a commercial 3-D virtual world created by Philip Rosedale, also known as Philip Linden, of Linden Lab and launched in 2003. It is a so-called "pancake" virtual world which is accessed through desktop or laptop computers using standard 2-D screens rather than virtual reality headsets. Second Life had its heyday in 2007 and 2008. It is often believed to have shut down in late 2008 or early 2009 when the constant stream of news about it dried up, but in fact, it celebrated its 20th birthday in 2023, and it is still evolving.
, OpenSim in short, was first published in January, 2007. Unlike Second Life, it is not one monolithic, centralised world. Rather, it is a server application for worlds or "grids" like Second Life which anyone can run on either rented Web space or at home, given a sufficiently powerful computer and a sufficiently fast and reliable land-line Internet connection. This makes OpenSim as decentralised as the Fediverse. The introduction of the Hypergrid in 2008 made it possible for avatars registered on one OpenSim grid to travel to most other OpenSim grids.

By the way, that long description is over 25,000 characters in total. And it isn't even one of my longest.
Well, it isn't art. The pictures are illustrative or informative or both, sometimes entertaining as well.
But whatever my reason for posting them, I always expect that someone couldn't care less about the context. After all, these are pictures from a completely unknown world, not to mention pictorial proof that, quite unexpectedly, virtual 3-D worlds are not dead. Then people don't just look at what matters in context; they go exploring and look at everything, depending on how detailed the subject is.
Well, and accessibility and inclusion ultimately mean that blind or visually impaired people must have exactly the same chances to do exactly the same things as sighted people. That is, they must also have the chance to experience my pictures just like sighted people do, no matter how sighted people experience them. Anything else would be ableist.
However, I describe my pictures independently of the image resolution. So I don't write what I see in the picture but what I see in-world. And there I see everything at a much higher resolution and can even look past obstacles.
For one thing, I can't assume that everyone who was born blind has a concept of image resolution. If I then write that the image resolution is too low to show more details somewhere, it always feels as if I were trying to weasel out of it. So a strawberry cocktail gets described as a strawberry cocktail even if it only appears as four pixels in the image itself.
For another, there is the rule that all text within an image must be transcribed verbatim. When this rule was established, however, only things like photographed posters or screenshots of social media posts were considered. That is, there is no clause about how to handle text that isn't legible in the image, but whose wording whoever describes the image knows exactly or can find out.
Consequently, I have to read the rule as saying that everything within the borders of the image that is text of any kind must be transcribed. So sometimes 20, 30 or more individual text snippets get transcribed, and possibly translated afterwards, of which only two or three individual capital letters are actually legible. Even if some of the text is so tiny that you need the image description just to know where these snippets are supposed to be in the image, they get transcribed anyway.
Really, this is just stubborn rule-following without questioning the rules' applicability. But because, especially on Mastodon, sighted people also like to read image descriptions (if they have the patience), they too can benefit from the transcripts, because not even the best eagle eye could read these text snippets.
In the end, my experience here has been: ask Mastodon users whether they would rather dig up additional information themselves, get it from linked external sources or receive it directly in the post. In my experience, they will prefer the latter. So that's what they'll get.
My own pictures really shouldn't be of any interest to blind and visually impaired people either. The whole topic shouldn't interest them because it is a very visual topic. They are, after all, renderings from very obscure virtual 3-D worlds.
Nevertheless, I go to enormous lengths to describe my pictures, to reach absolutely everyone at every level of knowledge, so that absolutely everyone understands my image posts and my image descriptions without having to google anything or ask me anything. I want blind and visually impaired people to be able to experience my pictures at least as well as sighted people, no matter how sighted people experience them, no matter what sighted people look at in my pictures and how closely.
My target audience for my pictures and my image descriptions includes everyone who happens to stumble upon my image posts. And I want everyone to be able to make sense of my pictures and my image posts immediately, without outside help. And so I have already spent two full days, from morning to evening, describing one single lousy picture.
Some say: "Something is better than nothing." But not everyone says that. Other people insist on the greatest possible quality, a very high level of detail and perfect compliance with all the rules for image descriptions and alt text.
It becomes problematic for me when it comes to making every single image 100% accessible to every single user out there, without exception. On the one hand, there are people who can't cope with longer descriptions. On the other hand, there are people who need very comprehensive explanations and don't want to gather them themselves. And information outside the post containing the image may be too hard to access, so all the information has to go into that one post. But I cannot boil the full information content of over 60,000 characters (and yes, I have described an image with over 60,000 characters before) down to at most 200 characters.
That's why I have already suspected that there are simply images that are inherently not accessible, because it is impossible to describe them in a way that is perfectly accessible to absolutely everyone. If that were so, such images shouldn't be posted to the Fediverse, at least not in a way that could reach anyone on Mastodon.