So you think AI is always better at describing and even explaining any image out there than any human, including insiders and experts on the very topic shown in the image? 100% accurately? And at a higher level of detail than said humans?
Well, then.
I'd like to see AI identify the place shown in the image as the central crossing at BlackWhite Castle and identify BlackWhite Castle as a standard-region-sized sim on Pangea Grid, a virtual world or so-called "grid" based on OpenSimulator.
I'd like to see AI explain the above, all the way down to a level that can easily be understood by someone who has only got a rough idea about what virtual worlds are.
I'd like to see AI correctly mention the pop-cultural links from this sim to German Edgar Wallace films and Frühstyxradio that are obvious to me.
I'd like to see AI correctly identify the avatar in the middle. By name. And know that identifying the avatar is appropriate in this context.
I'd like to see AI know and tell the real reason why the avatar is only shown from behind.
I'd like to see AI recognise that the image was not edited into monochrome, but it's actually both the avatar and the entire sim with everything on and around it that's monochrome.
I'd like to see AI transcribe text that's unreadable in the image. 100% accurately verbatim, letter by letter.
I'd like to see AI identify the object to the right and explain its origin, its purpose and its functionality in detail.
I'd like to see AI discover and mention the castle in the background.
I'd like to see AI accurately figure out whether it's necessary to explain any of the above to the expected audience and, if correctly deemed necessary, do so. And explain the explanations if correctly deemed necessary.
I'd like to see AI know and correctly mention which direction the camera is facing.
Finally, I'd like to see AI automatically generate two image descriptions, a full and detailed one with all explanations and a shorter one that can fit into 1,500 characters minus the number of characters necessary to mention the full description and explain its location.
I did all of the above. And more.
In fact, if AI is supposed to be better than me, I expect it to identify all trees in the image, not only the mountain pines, to give an even more detailed description of the motel advertisement and to give a much more detailed description of the map, including verbatim transcripts of all text on it and accurate information on what place is shown on the map in the first place.

All, by the way, fully automatically with no human intervention except for maybe a simple prompt to describe the image for a certain Fediverse project.
#Long #LongPost #CWLong #CWLongPost #OpenSim #OpenSimulator #Metaverse #VirtualWorlds #AltText #AltTextMeta #CWAltTextMeta #ImageDescription #ImageDescriptions #ImageDescriptionMeta #CWImageDescriptionMeta #A11y #Accessibility #AI #AIVsHuman #HumanVsAI

Allegedly, "good" advice for image descriptions is to always describe images like you'd describe them to someone on a landline phone.
Sorry, but that's nonsense. At least for anything that goes significantly beyond a real-life cat photo.
If you describe an image over the phone, you describe it to one person. Usually a person whom you know, so you've at least got a rough idea of what they need described. Even more importantly, if you don't know, you can ask that person what they want to know about the image. And you get a reply.
If you describe an image for a public Fediverse post, you describe it to millions of Fediverse users and billions of Web users. You can't know what they all want, nor can you generalise what they all want. And you can't even ask one of them what they need described before or while describing, much less all of them. In fact, you can't ask at all. And yet, you have to cater to everyone's needs the same and throw no-one under a bus.
If I see a realistic chance that someone might be interested in some detail in one of my images, I will describe it. It won't be in the shorter description in the alt-text; instead, it will be in the long description, which I've always put directly into the post so far, but whose placement I'm currently reconsidering. If something is unfamiliar enough to enough people that it requires an explanation, I will explain it in the long description.
Right now, only meme posts are an exception. They don't need as much of a visual description as long as I stick to the template, and a poll has revealed that people do prefer externally linked third-party explanations over my own ones blowing the character count of the post out of proportion. This is the one time that I can safely assume that I actually know what most people want.

#Long #LongPost #CWLong #CWLongPost #AltText #AltTextMeta #CWAltTextMeta #ImageDescription #ImageDescriptions #ImageDescriptionMeta #CWImageDescriptionMeta #Inclusion #A11y #Accessibility

To put it bluntly, Solid is Tim Berners-Lee's attempt at re-inventing wheels named Hubzilla and nomadic identity.
Unlike Solid, Hubzilla is neither a vague concept, nor is it in its earliest alpha stages of development. Hubzilla was launched as early as 2015, and it has just released version 9.4.1. It is based on something called the Red Matrix from 2012. The idea of nomadic identity itself is from 2011. So it's a tried-and-true, well-established solution. Not to mention that all of this was created by a protocol developer, Mike Macgirvin.
When Tim Berners-Lee invented Solid, he didn't know about the Red Matrix or Hubzilla or the concept of nomadic identity because none of that stuff had been advertised anywhere.
He has learned about Hubzilla and nomadic identity in the meantime. But now it's too late to stop, abandon Solid and support Hubzilla and its successors instead. So he keeps pushing Solid forward, knowing pretty well that he's directly competing against Hubzilla and its successors, the streams repository from 2021 and Forte which was created in July.
Truth be told, Solid can't hold a candle to Hubzilla and its successors. Hubzilla and (streams) are both still based on protocols of their own while optionally also being able to communicate through ActivityPub. But on (streams), nomadic identity via ActivityPub already seems to be implemented. And Forte only uses ActivityPub now while still being fully nomadic.
And so Solid can only survive by being advertised to the general public before they learn about Mike Macgirvin's creations. About 75% of the Fediverse has never even heard the name "Hubzilla", including many developers. The successors are even more obscure.
Unfortunately, Mike Macgirvin officially retired from Fediverse development on August 31st. While he still occasionally submits code to both the streams repository and Forte, the times when he worked on them just about every day are over. And unlike Hubzilla, which was passed on to the community in 2018, the streams repository is struggling to find new main maintainers, and Forte even more so because it doesn't have any users yet, AFAIK.
#Long #LongPost #CWLong #CWLongPost #FediMeta #FediverseMeta #CWFediMeta #CWFediverseMeta #Fediverse #Solid #RedMatrix #Hubzilla #Streams #(streams) #Forte #NomadicIdentity

It's in maintenance mode.
I believe so. Last I heard someone inherited the code base and its repo, without necessarily wanting to, and recently shut it down (IIRC). Meanwhile the old flagship instance and domain firefish dot social has been dead a long time.

Yup. Kainoa, the original developer of Firefish, dropped the project into that person's lap, but not any of the official servers.

Calckey died when Kainoa turned it into Firefish.
#Long #LongPost #CWLong #CWLongPost #FediMeta #FediverseMeta #CWFediMeta #CWFediverseMeta #Calckey #Firefish

Because too many cyclists out there are indifferent, ignorant assholes who see the bicycle merely as a "mechanical walking aid" to which no traffic rules apply, because none apply to pedestrians either. And rules for getting along with each other apply even less.
Supposedly, you only have to ring your bell when you can't get through at all, so that the pedestrians walking in front of you step aside. In other words: if you can get through without ringing, you don't ring, because you can't be bothered. Even if you're doing more than 30 km/h and your handlebar end whooshes past a pedestrian with two centimetres to spare.
Besides, a bell only costs money and adds weight. And a bike in competition trim, i.e. stripped of every attachment that's actually mandatory for road traffic except the brakes, is far cooler than one that complies with the regulations. And lighter, and therefore potentially even faster.
The idea that you might occasionally have to make people aware of your presence simply doesn't enter many of their heads. That's also why so many constantly ride without lights. They believe the headlight is only there to illuminate the road ahead. But: "I can see just fine, I don't need a light."
What doesn't get through their skulls is that bicycle lighting is there even more so that you are seen. When they ride around the way they do, without lights, without a single reflector, and on top of that in dark clothes on a dark bike, they're simply almost invisible to drivers.
And when they then get run over, they gripe about the evil drivers. That, by the way, is exactly why these clowns consistently ride on the pavement when there's no cycle path: they've been in dangerous situations with drivers often enough, but they always put the blame on the drivers.
#Long #LongPost #CWLong #CWLongPost #LangerPost #CWLangerPost #Radfahrer

This, by the way, is a very good litmus test for whether someone who is involved in virtual worlds in some way actually knows a thing or two about the virtual worlds:
What kind of avatars, do you think, will people want to create? Digital copies of themselves? Or, given the opportunity to do so, something entirely different?
Brand-new relevant rant article by myself:
Also, relevant shared post with a link to New World Notes:
Watch: Zuckerberg Unveils Horizon Worlds Updates: "Improved" Avatars (Except Not), and Hyperlink-Like "Primitives" (Sounds Familiar!)
#Long #LongPost #CWLong #CWLongPost #SecondLife #OpenSim #OpenSimulator #Horizons #HorizonWorlds #Metaverse #VirtualWorlds #Avatar #Avatars

I would give an extensive, detailed, accurate, truthful visual description anyway. "Extensive" as in "absolutely gargantuan like you wouldn't believe", as I always do for this kind of image.
I might add for deaf-blind users that nothing in the image is real, physical and tangible. But I guess they can figure it out themselves.
#Long #LongPost #CWLong #CWLongPost #FediMeta #FediverseMeta #CWFediMeta #CWFediverseMeta #ImageDescription #ImageDescriptions #ImageDescriptionMeta #CWImageDescriptionMeta #DeafBlind #A11y #Accessibility

That'd still be making stuff up. I only describe what's actually there and what everything actually is like. You can't touch these things, you can't feel them, and "what if" has no room in my image descriptions.
Besides, no, I don't know what unreal surfaces would feel like, also because they lack any and all properties of real-life surfaces.
And if I had to think up what literal dozens of different surfaces in one image feel like, it'd take me even longer to write my image descriptions. And it already takes me from several hours upward to describe one image.
#Long #LongPost #CWLong #CWLongPost #ImageDescription #ImageDescriptions #ImageDescriptionMeta #CWImageDescriptionMeta #VirtualWorlds #A11y #Accessibility

In an ideal Fediverse, I'd provide at least four to six different descriptions per original image so that as many people as possible get a description that's as close to what they need as possible. And I'd link to them, even though that makes things more inconvenient for reasons I've already explained, and hope I'll get away with it.
Of course, this isn't feasible.
Still, I do have to find a way to satisfy as many people on Mastodon specifically as possible. Even if they aren't interested in what I write about, they may still cut into my reach.
It's incredibly difficult to get away with sub-standard accessibility for images on Mastodon. You can quickly be ostracised for not providing an alt-text, as well as for providing an alt-text that isn't useful enough because it carries too little information about the image.
This starts at being lectured about having to provide good alt-text. It continues via refusals to boost posts with undescribed images. And it ends at people who post images without describing them being muted or even blocked. If your only connection on a particular instance blocks you, you also disappear from the instance's federated timeline, and your posts may no longer be delivered to those on that instance who follow a hashtag that you use.
Besides, reputation matters. If you don't take accessibility by Mastodon's standard seriously enough, and your posts appear on Mastodon often enough, you're likely to earn a bad rep as someone who's lazy and careless and basically ableist. Even useless alt-text is ableist by Mastodon's standards.
I already have a very active member of Mastodon's alt-text police among my connections, so I can't neglect this.
On the other hand, I don't think you'll get a bad rep for trying too hard, for doing over 1,000% of what's required and still trying to improve and optimise and max out your accessibility game.
What I can say is that I'm not constantly being scolded for giving too long and too detailed image descriptions, nor am I scolded for parking them in the post. I guess my alt-texts already keep the alt-text police satisfied for now, and they should still know that if my alt-texts aren't sufficient, there's still the long description.
The
"I have no interest in your topic, and don't even read your posts, but you violated this rule that I arbitrarily made up, and I will force you to change even if your actual followers don't even want it."
people may still cut into my reach, so I can't ignore them. Besides, they don't enforce rules which they themselves have just pulled out of thin air right there and then. They enforce rules which the greater Mastodon community has already firmly established, and which Mastodon is expected to live by. And when I say "Mastodon", I mean the typical Mastodon user's perception of Mastodon, which includes everything that happens on Mastodon, regardless of where it comes from.
You aren't exempt from these rules just because you're on Hubzilla. Besides, Mastodon users can't see where you are anyway. So they'll assume you're on Mastodon unless they know better.
I'd say my current ways of describing and explaining my in-world images have been working quite well. I usually don't get any feedback, but this also means I don't receive any negative criticism about neglected accessibility.
If I were to do any major changes such as moving the full descriptions into linked external documents, it'd be a gamble.
For one, the technical side is largely untested. All I know is that a blind user once told me that a Hubzilla Article didn't work with her screen reader, so Hubzilla Articles may actually not be accessible at all. I'm not going to post the majority of my future images on Hubzilla anyway, only on (streams) which can make Mastodon blank them out, and (streams) doesn't have the Articles app.
So all I could do would be a simple HTML document uploaded to the Files app that contains the full description, maybe with the described image embedded as you've suggested. However, I don't know how each browser on each OS handles an HTML document dealt to it not by a Web server, but by a file server. Will it open and display the document, or will it download it to the device like any other file without opening it?
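One way to narrow that question down before committing to the Files-app approach would be to check which Content-Type and Content-Disposition headers the file URL actually comes back with, because that is what most browsers go by when deciding between rendering inline and offering a download. A minimal sketch in Python, with a placeholder URL standing in for the real cloud path:

# Minimal sketch: probe how a file-hosting URL announces an uploaded HTML document.
# Browsers generally render a response inline if it arrives as text/html without a
# "Content-Disposition: attachment" header; otherwise they tend to download it.
# The URL below is a placeholder, not a real Hubzilla/(streams) cloud path.
import urllib.request

FILE_URL = "https://example.com/cloud/channel/long-description.html"  # placeholder

def probe(url):
    req = urllib.request.Request(url, method="HEAD")
    with urllib.request.urlopen(req) as resp:
        ctype = resp.headers.get("Content-Type", "<none>")
        cdisp = resp.headers.get("Content-Disposition", "<none>")
        print("Content-Type:       ", ctype)
        print("Content-Disposition:", cdisp)
        if "text/html" in ctype and "attachment" not in cdisp.lower():
            print("Likely rendered inline by most browsers.")
        else:
            print("Likely offered as a download instead of being displayed.")

probe(FILE_URL)

That only answers the server side, of course; how individual mobile apps hand such a link to the system browser would still have to be tested by hand.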
Besides, the acceptance of linked image descriptions is also untested, especially when they're the only available source of transcripts of the bits of text in an image. After all, this means that something that's defined as absolutely mandatory, the transcripts, is not available in the post at all, only in some external document that requires extra effort and, on mobile devices, an extra app to open.
Externally linked descriptions may improve the acceptance because they drastically shorten the image posts, and they make image posts with multiple images within the post text possible in the first place.
But they may just as well have the opposite effect because they remove a detailed image description from the post as well as any and all explanations and, worst of all, every last text transcript, and instead, they park everything someplace that requires extra effort and, on phones, letting another app open to access them.
Yes, Mastodon doesn't like long posts. Some Mastodon users actually block everyone whom they catch posting over 500 characters on the first strike. But this can be partially mitigated with a long-post content warning.
And there are way more complaints about missing or useless image descriptions than about too long posts. So I dare say the former cut into your reach more deeply than the latter.
#Long #LongPost #CWLong #CWLongPost #FediMeta #FediverseMeta #CWFediMeta #CWFediverseMeta #AltText #AltTextMeta #CWAltTextMeta #ImageDescription #ImageDescriptions #ImageDescriptionMeta #CWImageDescriptionMeta

Well, it isn't just about the size and shape. That'd be easy because these virtual worlds consist of 3-D models.
It's more about what it feels like to touch someone's skin, a piece of someone's clothing, the surface of an object. It's that kind of tactile information that many deaf-blind people go after, and it's only that kind of tactile information that they're interested in.
But how am I supposed to describe what it feels like to touch something that can't be touched because it simply doesn't have a physical real-life existence
Okay, so that avatar is wearing a tweed jacket. But that tweed isn't really tweed fabric. It's a digital painting of tweed fabric on an object which actually has no material whatsoever and no physical properties whatsoever. You can't reach out your hand and lay your fingertips on that jacket and feel it.
Even if that jacket's surface were bump-mapped or normal-mapped, which it isn't, it still couldn't be touched and felt.
Now, someone could suggest I should describe what things would feel like if they were actually real in the physical realm. But I prefer my image descriptions to be accurate and, most importantly, truthful.
And besides, what about objects that do not have a texture representing any real-life material? Not only are they without any physical properties, but they don't even suggest any lifelike physical properties. Even I don't know what it'd feel like to touch them.
#Long #LongPost #CWLong #CWLongPost #FediMeta #FediverseMeta #CWFediMeta #CWFediverseMeta #ImageDescription #ImageDescriptions #ImageDescriptionMeta #CWImageDescriptionMeta #VirtualWorlds #DeafBlind #A11y #Accessibility

Still, my intended target audience is very widespread, regardless of whether I'm posting about OpenSim, or whether I'm posting about the Fediverse.
I mean, sure, I could write my OpenSim posts in such a way that only those who know OpenSim inside-out because they're active users understand them. It'd be easier for me.
But there are probably fewer than 50 registered users in the Fediverse who know more about OpenSim than that it exists, if even that. Of these, maybe half a dozen are active. That wouldn't exactly be a big target audience.
At the same time, there are lots of people out there who receive my posts, too, be it by following me, be it by following one of the hashtags I use, be it by discovering one of my posts on their federated timeline.
Now, these people may pretty well have their curiosity piqued upon receiving one of my posts with an image from within a virtual world, the kind of which they would never have imagined exists since the Metaverse hype ended.
At the same time, the post doesn't tell them anything about this kind of world. Nothing is explained, and nothing is described, or if something is described, it's the very bare minimum requirement for the most basic accessibility. Even if they're sighted, they'd really like to know what kind of place this picture is from, and what the picture actually shows. There may be items in the picture that they've never seen before.
It's even worse if they're blind. Sure, you may say that blind or visually-impaired people have no use for virtual worlds, and thus literally not a single one of them could even be interested in that topic.
I wouldn't count on that. I'd say a blind or visually-impaired Fediverse user may just as well be curious about virtual worlds as a fully sighted user, including what everything looks like. If they are, they've got the same right to learn about it as a fully sighted user. Everything else would be ableist.
Both need my help. No-one else can really help them. No, not Google either. Not on this level of detail, and I'm not even talking about the first hits for OpenSim probably leading them to the human body simulation of the same name.
Both need explanations. And the blind or visually-impaired user needs visual descriptions. Only I can provide them on a sufficiently detailed and sufficiently accurate level.
Not providing sufficient description and explanation for image posts on this level of obscurity is about on the same level as not providing any image description at all whatsoever for images in general. Hubzilla doesn't care. Mastodon, on the other hand, calls that ableist. And Mastodon calls it out.
My target audience for posts about the Fediverse, including memes, is even wider. It's basically everyone, especially my target audience for Fediverse memes.
But not everyone is on the same level of knowledge. And if an image is included, which is almost always the case in my meme posts, I have to describe the image anyway.
I guess you've read that shared post. Image descriptions are not only good for blind or visually-impaired people, but they're also good for helping sighted people understand an image which they wouldn't understand without the description.
So I describe and explain the meme image.
But explanations don't belong in alt-text. I've already written about that.
So I describe the meme image in the alt-text, and I explain it in the post, where explanations belong.
Sure, I could say, "If they don't get it, they don't get it. I don't care." But doing so would make these posts inaccessible to the vast majority of their intended target audience.
#Long #LongPost #CWLong #CWLongPost #FediMeta #FediverseMeta #CWFediMeta #CWFediverseMeta #AltText #AltTextMeta #CWAltTextMeta #ImageDescription #ImageDescriptions #ImageDescriptionMeta #CWImageDescriptionMeta

Scott M. Stolz, first of all: if your target audience is mainly on Mastodon, and you don't want to lose any more reach than necessary, you have to play by Mastodon's rules, whether you want to or not.
My target audience is mainly on Mastodon, and I've already lost too much reach.
This means three things among several others.
One, there must be a sufficiently useful image description in the alt-text, full stop, no exception. For at least some Mastodon users, it doesn't matter how many image descriptions you've got elsewhere. Especially not if you have to hide your post behind a summary/content warning because it exceeds 500 characters, and they'll see your image before your post because they're still on Mastodon 4.2 or older, which keeps images outside of content warnings.
Believe me, I spend more than enough time lurking on mastodon.social to know.
Two, if you put a post with more than 500 characters out in the open with no content warning to hide it, droves of Mastodon users will block you. Do that often enough, and it'll dramatically cut into your reach on Mastodon. Remember, Mastodon is your target audience.
Three, you must protect people from potential triggers and sensitive images. On Mastodon, that isn't limited to porn and gore. It includes eyes. Eyes can trigger autists and other neurodiverse people. Not only in facial portraits of people looking straight into the camera, by the way.
Point three, by the way, means you can't safely post pictures with faces or eyes in them from Hubzilla to Mastodon. Hubzilla has no way of making Mastodon blank sensitive images out.
A summary/content warning only helps on Mastodon 4.3 which hides images under content warnings. But Mastodon 4.2 or older, which means one solid half of the whole Fediverse, only hides the post text under a content warning and leaves the images dangling below the hidden post. And when a post comes from outside, the images are not blanked out. They remain in everyone's plain sight.
My meme images almost always have faces in them. The OpenSim in-world pictures I want to post are likely to have faces in them, too. So posting them on Hubzilla is out of the question.
I've launched two (streams) channels for these while you were away:

(streams) automatically sets Mastodon's proprietary sensitive flag on images if they're included in a post that contains the hashtag #nsfw or #sensitive. Hubzilla doesn't have this feature.
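Roughly speaking, and this is just a sketch of the idea rather than (streams)' actual code, the conversion boils down to scanning the post text for those trigger hashtags and setting the non-standard sensitive flag on the outgoing note, which Mastodon then uses to blank the attached images behind a click-through:

# Rough sketch of the idea, not (streams)'s actual implementation: flag an outgoing
# note as sensitive when the post text carries #nsfw or #sensitive. The field names
# mirror how such a note commonly looks on the wire, but are illustrative only.
import re

TRIGGER_TAGS = {"nsfw", "sensitive"}

def build_note(text, image_urls):
    hashtags = {t.lower() for t in re.findall(r"#(\w+)", text)}
    return {
        "type": "Note",
        "content": text,
        "sensitive": bool(hashtags & TRIGGER_TAGS),  # Mastodon's non-standard flag
        "attachment": [{"type": "Image", "url": url} for url in image_urls],
    }

# Example: the #sensitive hashtag flips the flag on.
note = build_note("New avatar picture #OpenSim #sensitive", ["https://example.com/pic.png"])
print(note["sensitive"])  # True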
However, (streams) neither has the Articles app, nor does it have the Webpages app. If I wanted to use either, I'd have to post images on (streams) and then link to an article or a webpage on this very Hubzilla channel.
Now to some of your specific points.
Image
ALT tag ONLY if different than the short description.

Again, alt-text always. Unless you don't care about your audience on Mastodon. Again, I do.
So the short description under the image is redundant.
You don't have to repeat what the text says in the meme.

The transcript of the meme text belongs into the alt-text. Any text transcripts belong into the alt-text.
The only exception in my case is if there's too much text and it doesn't fit into the alt-text. In that case, the transcripts go into the long description, and the alt-text mentions that the long description contains all text transcripts.
But the alt text would need to be longer since it would need to explain what the meme is. The alt text and the short description would not be the same.

Alt-text must never explain anything. No information must ever be exclusively available in alt-text.
That's because not all people can access alt-text.

This means that explanations must be in a place that can be accessed by anyone who can access the post itself. This applies to anything that's part of the post text body, for example.
image (alt text = "An image about topic.")

Nitpick, maybe, but starting alt-text with "image about", "picture about" or "photo about" = baaaaaaaaaaaad style. Other media are at least tolerated, but "image", "picture" and "photo" are deadly sins.
Basically, when you have super long descriptions (over 500 to 1000 characters), you should treat it as if you are promoting a blog post on social media (rather than the whole thing only exists on social media).

My image descriptions don't make for good blog posts.
I have put image descriptions into articles twice already:

You can see that they don't work as stand-alones that I could show random strangers on the Web.
If I were to do it again, this would be the style, only with a more up-to-date description.
Speaking of which, let me show you how I currently actually handle them.
First of all, some meme posts.
My first attempt at an updated meme post was in early August.
Back then, I was still convinced that people find links to external information sources cumbersome and inconvenient to the point of being inaccessible and non-inclusive, and that they wanted everything explained in the post.
So I did. And I ended up with a whopping nine explanations:
All so that everyone in the Fediverse would easily have a chance to understand the meme post.
However, these nine explanations are over 25,000 characters long. The last two explanations make up half of that.
I couldn't really imagine that everyone actually wanted to wade through so much text in one place. I couldn't really imagine that the six meme explanations in the post would be more accessible than an external link which, through its own links, would explain stuff like snowclones as well.
And so I ran a poll a couple of weeks ago. Call the poll rigged because I mentioned which way of explaining a meme would end up with how many estimated characters. I call it honest.
Without the character count, I guess just about everyone would have wanted me to explain everything in the post. That is, Mastodon users would have wanted me to explain everything in the alt-text. Mastodon does everything image-related in alt-text because that's how Mastodon does it, and because you can't do much with only 500 characters.
But with the character count, nobody wanted me to explain things in the post. Not if that meant 10,000+ characters.
As you wish. So I switched to linking what I could possibly link.
My next meme post was very simple, so one link to KnowYourMeme for the meme template was sufficient. Otherwise, it went with no explanation at all.
The one after that required an explanation for the image itself, but beyond that, a link to KnowYourMeme was enough.
My latest meme post required some more explanation and a few more links. I had to explain the whole situation which the meme references.
The explanation had to include a very brief explanation of (streams) because the link to the streams repository (no. 3 out of a whole lot of links) isn't informative enough for someone who barely knows the Fediverse beyond Mastodon.
Firefish required a short explanation of the situation plus a link to the EOL announcement plus a link to the official Firefish repository plus a link to the official Firefish lighthouse instance.
/kbin required another short explanation of the situation plus a link to the repository to show for how long it has been abandoned plus a link to the dead website plus a link to the dead lighthouse instance plus a link to a post on Lemmy that gives some more insight.
In addition, at the top, there are two links to KnowYourMeme because I've combined two memes.
I could say that I've got meme posts down pat now. It only gets a bit difficult if I have to explain some more in the post because there are no external explanations for certain aspects of the post available. But I don't have to re-phrase entire KnowYourMeme pages anymore since people have voted against it.
My original in-world images are trickier because they work differently.
For one, they require much, much more detailed visual descriptions. Meme posts based on known and established templates hardly require any visual description. My in-world images, on the other hand, I have to re-paint with words because everything in them may be interesting for someone out there.
There also tends to be more text to transcribe. The basic rule is, "If there's text in the image, it must be transcribed." The rule does not handle any edge-cases, so I assume they have to be transcribed, too, if I can transcribe them. And I can. And "important within the context" doesn't apply if the image itself is the context.
Besides, they require much, much more explanation from me. There are far more things about them that the average casual onlooker is unfamiliar with. Thus, they have to be explained. But there are far fewer useful explanations for them readily available on the Web, if any. And before I force people to scrape together useful and comprehensible information from elsewhere on the Web, I'd rather explain OpenSim myself in a way that people understand right away. And with fewer characters for more useful content.
I've written an article about this.
My most recent OpenSim image post here on Hubzilla is from around April. It's only slightly outdated.
In numbers, it has

The post is an absolute monster. It's my second-longest post to date. Only one other post grew even longer, at over 78,000 characters; I wrote it a year ago, and it contains three highly detailed images. But none of the image descriptions in that post manage to exceed 60,000 characters. And that post only has faint traces of a description preamble. Also, it places each description below the corresponding image.
This excessive size is one of the reasons why I considered parking the description someplace else.
My most recent OpenSim image post went to (streams) two months ago.
It contains two images, and yet, it's much shorter. I've managed to reduce the necessary description effort by reducing the scenery in both images as much as I possibly could. Also, both images show the same avatar in the same outfit which I only had to describe and explain once.
This time, I put both images below one another in one block. The image descriptions are their own block right below. Also, this time, the preamble with common explanations for both images is the largest part, also because it contains the common description and explanation of the avatar for both images.
In this case, this was simply the easiest way to do it because both images have so much in common. However, this does not translate well to individual image descriptions per image, linked below the images themselves. If I were to do that, each image description would have to contain all necessary description elements and all necessary explanations. It might largely end up copy-paste, but still.
This was easy to do for another reason: The actual post text is only one block. The images are separate from it.
Now, however, I'm planning to write a blog-style post with up to four images within the post text, as in surrounded by post text above and below. I want to write a few things about the images that are not image description in the alt-text/accessibility sense. And that has to be close to each individual image.
This excludes individual long image descriptions per image right below each image. This also makes a big image description block at the end inconvenient because the described images would be so far away from their descriptions.
And that's why I considered putting the raw image descriptions into external documents without, however, completely overthrowing my way of describing images. I've honed it to a high level of quality over almost two years, improving my description style from post to post. And I'm fully aware just how detailed image descriptions for images from virtual worlds have to be.
I just need to find a way to make it work with a blog-style post with several images embedded within the actual post, both without ripping the post apart by placing humongous chunks of image descriptions into it and without placing the descriptions too far from the images within the post.
#Long #LongPost #CWLong #CWLongPost #FediMeta #FediverseMeta #CWFediMeta #CWFediverseMeta #Memes #Hubzilla #Streams #(streams) #AltText #AltTextMeta #CWAltTextMeta #ImageDescription #ImageDescriptions #ImageDescriptionMeta #CWImageDescriptionMeta

I guess Heise, too, would prefer to shut down its instance and then claim always and everywhere that the Fediverse is dead anyway because it's far too complicated for "the people". Then they wouldn't have to bother with it anymore and could ignore it completely, just as they've been doing with diaspora* since 2012 at the latest.
If it weren't for that one person who is genuinely committed, who actually engages in dialogue, who actually experiments, and who probably generates more traffic single-handedly than the entire Swiss government instance combined.
#Long #LongPost #CWLong #CWLongPost #LangerPost #CWLangerPost #FediMeta #FediverseMeta #CWFediMeta #CWFediverseMeta

Scott M. Stolz:
Disadvantages:

It's basically option 3, but even worse.
#Long #LongPost #CWLong #CWLongPost #FediMeta #FediverseMeta #CWFediMeta #CWFediverseMeta #ImageDescription #ImageDescriptions #ImageDescriptionMeta #CWImageDescriptionMeta
Another alternative would be to deactivate ActivityPub (optional) (because it is toxic) and rely on the Diaspora / DFRN.

This would basically create three parallel "Fediverses".
One, the ActivityPub-based one which could be considered "bugged" and illegal as per GDPR because data sniffers would be everywhere.
This part of the Fediverse would quickly wither away and die for one sole reason: Mastodon, which makes up over 70% of today's Fediverse, is developed in Germany, and mastodon.social, which makes up over 20% of today's Fediverse, is hosted in Germany. Germany is a member of the EU and thus GDPR area. In fact, Germany was a driving force behind GDPR. Mastodon, in its entirety, would basically become illegal in its own country. Unless, of course, Gargron took the step to move Mastodon over to the USA entirely and shutter Mastodon gGmbH.
Two, the "safe" Social Web based on the diaspora* protocol and consisting of diaspora* itself, Friendica, Hubzilla and Socialhome. Many users of the latter three (if Friendica still offers this option, and if Socialhome introduces it) will turn ActivityPub off and cut off all ties to the ActivityPub-based Fediverse.
Three, the "even safer" nomadic grid of Hubzilla and (streams), based on the Zot6 protocol. Both definitely do allow ActivityPub being off, and on Hubzilla, it is off on new channels by default.
I guess both would feel quite some relief when they're no longer bound to a "Fediquette" defined entirely by Mastodon users who barely know, if at all, that the Fediverse is not only Mastodon. Especially the several public Hubzilla hubs in Germany, including the two biggest ones, would be quick to turn ActivityPub off at the hub level because leaving it on would suddenly be illegal.
I'm not sure what'd happen to Mike Macgirvin's latest fork, Forte, which is (streams) with Nomad and Zot6 removed and nothing but ActivityPub left. On the one hand, like (streams), it has an unparalleled permissions system. On the other hand, ActivityPub itself would become inherently unsafe. Besides, even something as basic as direct messages aren't necessarily that private if Pleroma admins can choose to circumvent direct message privacy on a per-instance level. And yes, Pleroma has that option.
#Long #LongPost #CWLong #CWLongPost #FediMeta #FediverseMeta #CWFediMeta #CWFediverseMeta #Fediverse #ActivityPub #Mastodon #Pleroma #diaspora* #Friendica #Hubzilla #Streams #(streams) #Forte #Zot #Zot6 #GDPR

Their interest is hegemony.

They won't be able to achieve that. For that, they'd all have to join forces. Instead, the big players would rather bash each other's heads in.
If Zuckerberg wanted to buy out the Fediverse through the SWF and assume full control over it, that'd go hard against what Flipboard wants, what Ghost wants, what Mastodon wants etc.
If you think Zuck and Gargron will join forces in EEE'ing the Fediverse, I have a bridge to sell you. If Zuck wants to EEE the Fediverse, that includes Mastodon, which is why everyone panics. But Gargron himself wants to EEE the Fediverse, too. Mastodon users don't notice it, but everyone else does, from Misskey to (streams). And if he could, that'd include assuming indirect control over Threads by forcing Mastodon's proprietary, non-standard solutions upon it.
Write.as, maintainer of WriteFreely, is a supporter of the SWF, too. WriteFreely is basically one of the Fediverse's alternatives to Medium. Notice how Medium itself is a supporter. Medium won't try to build a monopoly and get rid of WriteFreely through the SWF if WriteFreely itself has a say in this. And so does Automattic, because Medium competes against WordPress.
Mastodon vs Write.as + Medium + Automattic = "The Fediverse is for microblogging, we won't support more than a tiny subset of HTML, and the rest of the Fediverse has to adjust" vs "Everything in the Fediverse must support full HTML rendering for Article-type objects."
And the SWF won't grant voting power according to how much money someone invests into it. Meta won't be able to out-vote everyone else by pumping a billion into the SWF. If that were possible, the very founder of the SWF would have no voting power at all.
Lastly, these are the supporters. The financial backers. They are not the only members, much less the only members the SWF will ever have. It's just kind of difficult for a Fediverse project with nothing but one or two spare-time devs behind it to join the SWF.
CC:
#Long #LongPost #CWLong #CWLongPost #FediMeta #FediverseMeta #CWFediMeta #CWFediverseMeta #Fediverse #Mastodon #MetaPlatforms #Threads #Automattic #WordPress #Medium #Write.as #WriteFreely #SWF #SocialWebFoundation #EEE #EmbraceExtendExtinguish

And this could actually improve with more disabled Fediverse users because that'd mean more people who could tell us what they really need.
Ideally, they'd also not flock to only Mastodon, stick to Mastodon and largely ignore the rest of the Fediverse. Ideally, some of them would test Friendica and find it enough to their liking to start a public forum about accessibility in the Fediverse because that's something that needs to be discussed in a place that supports actual discussions between more than just two people.
My original images are extreme edge-cases in terms of describing them. My way of describing them is mostly based on assumptions and erring on the "too much" side to be safe. These assumptions include, but aren't limited to:
Based on these assumptions, I tend to spend an eternity describing one single image, and thus, I only rarely post images in the first place. I often don't even take images if I think it's impossible for me to sufficiently describe them. Virtual world pictures come with the disadvantage of showing people whole new worlds and making them want to explore these new worlds through these images, which, based on my assumptions, only drives the required level of detail up even further.
For example, I've used my channel picture with that huge, weird building in it before. I went to three or four other places before, and this was the only one that I deemed possible to describe. It still took me two full days to research and describe it. The result was an alt-text of exactly 1,500 characters, including a bit over 1,400 characters of short description, and a full description of over 60,000 characters. And that was even after actually limiting myself to what can be seen if you're in that place, as opposed to describing images within that image with details that could only be seen if you teleported to the places shown in these images.
It's probably hard to say I haven't tried hard enough, but I guess hardly anyone has ever opened the summary/content warning, and hardly anyone ever will. My image descriptions may be useful for a few neurodiverse users who actually need such detailed descriptions and explanations, and they may be good for silencing the "alt-text police" which seems to increasingly go for not only alt-text availability, but also the quality of image descriptions. For most blind or visually-impaired people, they may be of only marginally more use than no image description, if at all.
By the way, I wanted to post another picture of the same building from another angle. The reason why I didn't was not because it would have been too tedious to describe it. It was because there are pictures in that building with human faces on them, and from that other angle, they became visible, for any definition of "visible" anyway. That would have meant a potential eye contact trigger by the definition I know.
So I didn't post that second image, because Hubzilla gives me no means of hiding an image from users of Mastodon 4.2 or older or users of many mobile Mastodon apps. I've since started two (streams) channels on which I post images with faces or eyes, because (streams) supports Mastodon's non-standard sensitive image flag. And I've considered replacing all my various profile pictures in the Fediverse with pictures with no face in them, although just about everyone else has at least one face in their profile picture.
It's clear that assumptions can be bad both ways. It's bad to assume that colours are useless in image descriptions, but all the things I assume up there lead to image descriptions that may be similarly useless, but due to their overwhelming length.
Disabled people want us to make things accessible to them based on what they actually want, not on what we assume they want. There's that slogan: "Nothing about us without us." They want us to listen to them.
However, it's impossible to listen to them as long as they don't speak up. Right now, they don't, probably also because they're too few, and they're all sitting somewhere where meaningful group discussions are prevented by technology. And if you have a question about accessibility in extreme edge-cases like mine, searching mastodon.social for blind users and DM'ing them, half a dozen at once, can't be the right way. Unfortunately, it's currently the only way.
#Long #LongPost #CWLong #CWLongPost #FediMeta #FediverseMeta #CWFediMeta #CWFediverseMeta #Fediverse #AltText #AltTextMeta #CWAltTextMeta #ImageDescription #ImageDescriptions #ImageDescriptionMeta #CWImageDescriptionMeta #NothingAboutUsWithoutUs #Inclusion #A11y #Accessibility

I keep seeing painfully useless alt-text again and again myself.
Still, I'm wondering if there's any consensus, any standard for alt-text quality that's specific to Mastodon. (I'm saying "Mastodon" because that's the only part of the Fediverse where alt-text matters, by the way.)
Whenever I post an image, rare as this may be due to the immense effort, I try to improve and optimise my way of describing images and making my image descriptions available to Fediverse users.
But there isn't much information available on what people want. I hardly have any sources for such information, I barely get any feedback for my image posts, and that feedback contradicts itself. On top of that, there's absolutely no place in the Fediverse where Fediverse accessibility could be discussed between more than just two users at a time.
So all I can do is try to assume what Mastodon users may want. And I'm very likely constantly stepping on people's toes or even throwing many of those who need image descriptions under buses while doing so.
I typically describe my original images twice: once with a shorter, purely visual description in the alt-text, and once with a very long, very detailed description in the post that also contains all text transcripts and all necessary explanations and that is mentioned in the alt-text.
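The arithmetic behind the shorter description is simple but strict, assuming Mastodon's 1,500-character alt-text limit: the sentence pointing to the long description gets reserved first, and the visual part has to fit into whatever is left. A throwaway sketch of that budgeting (the strings are made up for illustration):

# Sketch of the character budgeting I otherwise do by hand, assuming Mastodon's
# 1,500-character alt-text limit. The pointer to the long description is reserved
# first; the visual description has to fit into the remaining budget.
ALT_TEXT_LIMIT = 1500

def build_alt_text(short_description, pointer):
    budget = ALT_TEXT_LIMIT - len(pointer) - 1  # 1 character for the separating space
    if len(short_description) > budget:
        raise ValueError(
            f"Short description is {len(short_description) - budget} characters over budget; "
            "trim it rather than cutting the pointer to the long description."
        )
    return f"{short_description} {pointer}"

alt = build_alt_text(
    "Monochrome view of a castle courtyard in a virtual world, seen from behind an avatar.",
    "A full description with all explanations and text transcripts is in the post itself.",
)
print(len(alt), "characters")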
Now that I'm getting closer to finding an optimal way of describing and explaining memes based on established templates, I'm starting to question parts of the above method for my original images.
#Long #LongPost #CWLong #CWLongPost #FediMeta #FediverseMeta #CWFediMeta #CWFediverseMeta #AltText #AltTextMeta #CWAltTextMeta #ImageDescription #ImageDescriptions #ImageDescriptionMeta #CWImageDescriptionMeta

Always. No exceptions anymore for I don't know how long. I don't forget to do something that takes hours or days for one image.
They're always 100% hand-written. Maybe copy-pasted from my own descriptions of similar images, but no AI involved.
And I'm constantly trying to improve my description style, so the older an image post is, the less I consider its image description(s) even halfway up-to-date. For example, I'm working on completely re-designing my way of describing and explaining memes.
I'd rather not post images at all than post them with what I'd consider sub-standard image descriptions.
#Long #LongPost #CWLong #CWLongPost #AltText #AltTextMeta #CWAltTextMeta #ImageDescription #ImageDescriptions #ImageDescriptionMeta #CWImageDescriptionMeta

There could be several reasons for this.
One, "All my friends are here." They don't want to let go of large parts of their online social circle.
Alternatively, "But muh followers." They don't want to let go of their fame and move someplace that doesn't even have enough users to build up the same numbers of followers in no time.
Two, inconvenience. The Fediverse is too cumbersome to use in comparison with centralised corporate silos.
You can basically load one of the big corporate apps onto your phone, enter a user name and a password, and bam, you've got a bustling feed.
On the other hand, while you can load Mastodon onto your phone, it asks you too many questions. There's that field where you have to choose an instance. And yes, leaving it on the default is a choice you have to make. It doesn't railroad you hard enough.
And once you're online, your timeline is dead. Many have left because they thought the whole Fediverse was dead, because their timeline was, and because Mastodon had been praised to them as, quote, "literally Twitter without Musk". You have to take care of getting content into your timeline yourself, and there's no assistant to help you with it. Mastodon doesn't have training wheels; everything else in the Fediverse has them even less, but doesn't even require them.
Oh, and you can't load "Fediverse" onto your phone at all. Or anything else in the Fediverse that could largely do the same job as Mastodon if people knew about it.
Third-party apps? Most of them have probably never in their lives used an online service through an app that doesn't have the same name as the online service.
Lastly, it isn't worth it. The only thing that may motivate them to move would be a place that's totally guaranteed to be an absolute, 100% safe haven for them with no harassment, no ableism and no hindrances in accessibility. Same goes for other marginalised minorities such as BIPoC or the 2SLGBTQQIA+ community, only that spoons aren't as much an issue for them.
However, it may actually be known, even over there, that these latter two groups aren't perfectly safe in the Fediverse either. The very reason why BlackMastodon failed. So disabled people may not be safe either. Thus, the effort of leaving everything behind and learning to use new software might not be justified by the result if nothing much changes anyway.
#Long #LongPost #CWLong #CWLongPost #FediMeta #FediverseMeta #CWFediMeta #CWFediverseMeta #Disabled #DisabilityTwitter #Mastodon #Fediverse

I myself resort to the following:

As for the last point, I've recently held a poll on this. I wanted to know how and where people wanted meme images to be explained.
I had a few options, including explaining everything, including the explanations themselves, in the post and linking to places with explanations beyond the actual image.
I had read a while ago that people generally prefer everything being explained in the post, right where the image is. That's much more convenient than having to open Web sites that might not even be accessible and constantly having to switch back and forth between your Mastodon app with the image and your crappy mobile browser with some explanations.
So I tried just that, to see what'd happen. Of course, only explaining the image wouldn't cut it if there are still things that people might not be familiar with, namely the topic of the image (FEP-ef61) and the meme template (One Does Not Simply Walk Into Mordor). So I had to explain these as well. But even then, there were things that needed explanations because many people wouldn't know right off the bat what they are (Hubzilla, (streams), snowclones, image macros, advice animals). So I explained these on top of everything.
I ended up with an explanation for the image, two explanations for the image explanation and six explanations for the two explanations for the explanation. And over 20,000 characters of explanations. I could post these 20,000 characters in one go, and I did so on (streams), where I have a character limit of over 24,000,000. But I had my doubts that this was actually what people wanted.
So I held the poll. Since nobody could estimate which option would require how many characters, I added the estimated character counts.
Results:

My verdict is: I'll give those who don't want a full set of explanations in the post what they want. But I will add links to further explanations, and they will work because I've always explained my images in the post.
#Long #LongPost #CWLong #CWLongPost #FediMeta #FediverseMeta #CWFediMeta #CWFediverseMeta #AltText #AltTextMeta #CWAltTextMeta #ImageDescription #ImageDescriptions #ImageDescriptionMeta #CWImageDescriptionMeta #Memes

Hm...
Firefish is in maintenance mode. Its death has been announced for the end of the year.
Iceshrimp is basically in maintenance mode, too. It's being rewritten from scratch as Iceshrimp.NET in C# as opposed to Misskey's Node.js. It sounds promising, especially because all the old Misskey cruft will be absent from the new code, but it isn't ready yet.
Sharkey has lots of features, but it also seems to be buggy, and its devs are surrounded by a lot of drama.
The new hotness right now is CherryPick, a Sharkey fork. Better development, said to be more reliable than Sharkey, without the drama, and it seems to have even more features than Sharkey.
As for Catodon, itself an Iceshrimp fork, I haven't heard any news about it in ages. It could be that the development is waiting for Iceshrimp.NET to be fully released to rebase to it or something.
#Long #LongPost #CWLong #CWLongPost #FediMeta #FediverseMeta #CWFediMeta #CWFediverseMeta #Misskey #Forkey #Forkeys #Firefish #Iceshrimp #Iceshrimp.NET #Sharkey #CherryPick #Catodon
If there has been no need for a smartphone app in 14 years (Friendica started in 2010 according to Wikipedia), then it's just a social network of nerds who hang around at their PC or notebook all day anyway, right?

I was already on Friendica in the early 2010s, so I know what it was like back then.
Mike Macgirvin developed Friendica, which was still called Mistpark at the time, in four months, from March to July 2010. All by himself. With no budget. To a stable point release, as far as I know. And Mike is really a protocol designer; he even built the DFRN protocol that Friendica was based on from scratch himself. He is not a web designer and certainly not a mobile developer.
For comparison: diaspora* needed four developers, $320,000 in crowdfunding and a year and a half for a first public beta that was still very unfinished. And diaspora* can do less today than Mistpark could in the summer of 2010. diaspora* doesn't have an official app either.
Nobody would have cared about a dedicated app back then anyway. In 2010, we were still a long way from almost everyone having a smartphone and using it as their primary or even only digital device. Not with 3.5" screens, weak CPUs and meagre RAM, and certainly not with patchy UMTS coverage. The standard was still the PC or laptop at home on a fixed-line DSL connection.
Back then, it was far from the case that there was a dedicated mobile app for everything, absolutely everything that existed online, let alone completely normal for certain things to work only through a mobile app.
That was also down to the smartphone landscape. The iPhone sold better than everything else combined, but its target groups were still only hipsters, Apple fanboys who camped outside the flagship store before every device release, show-offs for whom the apple on the phone was what the Mercedes star is on a car, and fashion dolls for whom the apple on the phone was what the Prada logo is on a handbag.
Android was still new and niche. Famously, it only became worthwhile for commercial developers to develop for Android as well once the Samsung Galaxy S alone outsold the iPhone. Basically, Android back then was the dream of a mass-produced Linux smartphone come true, i.e. what people had hoped OpenMoko would become and what otherwise really only Nokia offered in undiluted form with the legendary N900.
Apart from that, many existing devices still came from the one-time market leader Nokia and ran Symbian, but Symbian was dead, had always been niche in absolute numbers anyway, and was itself pretty fragmented.
Consequently, the predominant mobile device with Internet access was none at all.
The user base of Friendica, or whatever it was called at any given time, really did consist mostly of computer nerds. The share of Linux users was above average, especially since quite a few hosted their private node on their own hardware at home. Beyond that, there were a few left-wing alternative activists who probably hoped to be even harder for the authorities to track down on Friendica than on diaspora*, let alone Facebook.
There was no big audience overlap between smartphone users and Friendica users. Apple and Friendica had almost no audience overlap at all, because Apple stood for exactly the kind of vendor lock-in that Friendica set out to break. Pretty much nobody among the Friendica users of that time would have needed a dedicated iPhone app for Friendica.
Even just using Friendica in a smartphone browser was neither a necessity nor in any way standard, but basically more of a technical demonstration: look, this works. The truly daring tried to use Friendica through StatusNet apps. That, of course, often failed because the majority of StatusNet apps were hard-coded to Identi.ca, which was so big that making the instance selectable added too little value. But that, too, was more an experiment than something born out of urgent necessity.
Even when a Friendica app was considered at all, and we're talking about ca. 2012 at the earliest here, when Friendica had its final name and had been taken over by the community, the thinking was more "might be quite interesting at some point", but definitely not "we urgently need this, because for most people Friendica is unusable without a dedicated smartphone app".
Because Friendica was never advertised, it always remained largely unknown. Even after diaspora*, so heavily hyped in 2010, had sunk into obscurity, Friendica was the more obscure of the two. The Facebook alternative was Google+. A choice between the plague and cholera, but at least it wasn't Facebook, and people knew it. Google+ showed just how unknown diaspora* had become by then: it was a centralised diaspora* clone from A to Z, starting with Google+ circles, which were copied from diaspora*'s aspects (which Mistpark had had even earlier), and ending with Google rolling out a clone of the diaspora* UI across all its services. And nobody outside of diaspora* and Friendica noticed that all of it was lifted from diaspora*.
And because Friendica was so unknown, it barely grew. Which meant no new capable developers came into the Friendica scene who might have built dedicated Friendica apps. Apps that the Friendica users of the time probably wouldn't have needed that much anyway. To make matters worse, from 2015 onwards there was the creeping exodus to Hubzilla.
Apart from Friendica/Diaspora/GNU Social etc., is there any other social network that does without smartphone apps?

In the non-commercial, decentralised space, really only Mastodon has good smartphone app coverage, and it is definitely the only one with an official app of the same name.
Pleroma, Akkoma, Misskey, Firefish, Iceshrimp, Sharkey, CherryPick, Catodon and so on and so forth are mostly used through Mastodon apps, with the corresponding limitations. As far as I know, there are no apps designed specifically for each of them, which would be madness anyway given the sheer number of Forkeys.
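To make those limitations a bit more concrete, here is a minimal TypeScript sketch, not taken from any actual app, of why generic Mastodon clients work against all of these servers in the first place: they all speak Mastodon's client API, so only the base URL differs. The server addresses are placeholders, and some instances require authentication even for the public timeline.

```typescript
// Minimal sketch: the same Mastodon client API call against different server
// software. The domains below are placeholders, not real instances.

interface Status {
  id: string;
  created_at: string;
  content: string;              // HTML body of the post
  account: { acct: string };    // author handle
}

async function fetchPublicTimeline(baseUrl: string): Promise<Status[]> {
  // GET /api/v1/timelines/public is part of the Mastodon client API that
  // Mastodon-compatible servers also implement.
  const res = await fetch(`${baseUrl}/api/v1/timelines/public?limit=5`);
  if (!res.ok) {
    throw new Error(`HTTP ${res.status} from ${baseUrl}`);
  }
  return (await res.json()) as Status[];
}

// Same client code, different server software behind it.
for (const server of ["https://mastodon.example", "https://akkoma.example"]) {
  fetchPublicTimeline(server)
    .then(statuses => statuses.forEach(s => console.log(server, s.account.acct)))
    .catch(err => console.error(server, err));
}
```

Everything a server offers beyond this shared API is exactly what such an app cannot reach.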
Hubzilla, (streams) and Forte are all designed from the outset to be used as a PWA. In the late 2010s, somebody developed a dedicated Hubzilla app for Android, Nomad. But it was last updated in December 2019 and reportedly no longer works on quite a few newer smartphones. And even Nomad is basically just a specialised browser with its own access to a few Hubzilla functions.
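For anyone wondering what "designed as a PWA" means in practice, here is a rough TypeScript sketch, explicitly not Hubzilla's actual code: the web UI registers a service worker and caches a minimal app shell, so the browser can install the site like an app. All file names are made up for the example.

```typescript
// Rough PWA sketch (illustrative file names, not Hubzilla's real code).

// --- main.ts, loaded by the regular web UI ---
if ("serviceWorker" in navigator) {
  navigator.serviceWorker
    .register("/sw.js")                       // path is an assumption
    .then(() => console.log("service worker registered"))
    .catch(err => console.error("service worker registration failed", err));
}

// --- sw.ts, compiled to /sw.js (typed loosely to keep the sketch short;
// --- real code would use /// <reference lib="webworker" />) ---
const sw = self as any;

sw.addEventListener("install", (event: any) => {
  // Pre-cache a minimal app shell so the UI still loads on a flaky connection.
  event.waitUntil(
    caches.open("app-shell-v1").then(cache =>
      cache.addAll(["/", "/manifest.json"])   // illustrative file list
    )
  );
});

sw.addEventListener("fetch", (event: any) => {
  // Answer from the cache first, fall back to the network.
  event.respondWith(
    caches.match(event.request).then(cached => cached ?? fetch(event.request))
  );
});
```

In other words, the "app" is still the website itself, just packaged so a phone can treat it like an installed app.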
Admittedly, a Hubzilla app that puts all, and I mean all, of Hubzilla's features onto a mobile UI of its own would be an absolute monster. Imagine a good, free, open-source e-mail app. Such a Hubzilla app would be even more intense. For (streams) and Forte, it would only be marginally better.
#Long #LongPost #CWLong #CWLongPost #LangerPost #CWLangerPost #FediMeta #FediverseMeta #CWFediMeta #CWFediverseMeta #Fediverse #iOS #Android #Apps #FediverseApps #Friendica That's a smartphone user thing.
As an iPhone user, you're used to downloading a dedicated app from the Apple App Store for every little thing. As an Android user, you're used to installing a dedicated app from the Google Play Store for every little thing.
Smartphone users often simply aren't computer nerds who are still used to a desktop browser. Many of them plainly don't own a computer anymore, because they can do everything they would need a computer for on their phone. Maybe, if they have an iPhone, they also have an iPad at home. Others have never owned a computer in the first place.
When they want to use $XYZ, they grab their mobile device, open the respective official app store, install $XYZ and get going right away. Another reason, by the way, why Mastodon is so popular: you can download Mastodon as an app onto an iPhone. Not Pleroma, because there is no iOS app called "Pleroma". Not Friendica either, because there is no iOS app called "Friendica".
Just look at typical media coverage of the Fediverse. Mastodon isn't described as a project or anything like that, but as an "app". And not "app" as in "server application", but "app" as in the end-user app you install on your smartphone. Mastodon is first and foremost a phone app with some strange hocus-pocus happening in the background. Doubly strange because, unlike with Twitter and Facebook and Instagram and TikTok etc., you have to pick a "website" to register on instead of there being exactly one.
That's another reason why the Fediverse tends to get reduced to Mastodon: Mastodon is pretty much the only Fediverse project that is also a phone app. Pleroma, Akkoma, Misskey, Friendica, Hubzilla etc. can't be sold to a technically unsavvy mass audience as something they understand, namely a phone app, because they aren't phone apps, because they don't have official mobile apps of the same name. So nobody talks about them at all.
Two more points come on top of that. For one, iOS in particular has the problem that the pre-installed browser, Safari, is simply crap and spoils iOS users' appetite for using web browsers altogether. Especially if they've never seen another browser because they've never in their lives operated a device that doesn't run iOS.
For another, dedicated mobile apps have the advantage of implementing the functionality of whatever online service in the device's standard UI style. And not just somehow. Everything feels like a seamless whole and is, from the outset, not just optimised for mobile devices but designed for them. Someone who only knows smartphones and maybe tablets won't be won over by a UI designed for desktop web browsers.
CC:
#Long #LongPost #CWLong #CWLongPost #LangerPost #CWLangerPost #FediMeta #FediverseMeta #CWFediMeta #CWFediverseMeta #Fediverse #iOS #Android #Apps #FediverseApps That's exactly the problem with many "Fediverse" apps: "Fediverse" means Mastodon, and the apps are built strictly against Mastodon. That means they support at least the bulk of Mastodon's features. But whatever features other projects with a Mastodon API offer on top of Mastodon's, they don't support. Either it isn't worth the effort because "nobody" uses anything that isn't Mastodon, or the developer only learned that the Fediverse is more than Mastodon when the app was practically finished.
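Purely as a hypothetical sketch of how an app could avoid assuming that every server is Mastodon: most Fediverse projects publish NodeInfo, which names the server software, so a client could check that before deciding which features to show. The domain and the printed version string below are placeholders.

```typescript
// Hypothetical sketch: ask a server what it actually runs via NodeInfo,
// which Mastodon, Pleroma, Friendica, Hubzilla and others expose.

interface NodeInfoLinks {
  links: { rel: string; href: string }[];
}

interface NodeInfo {
  software: { name: string; version: string };
}

async function detectServerSoftware(baseUrl: string): Promise<string> {
  // Step 1: the well-known document lists one or more NodeInfo schema URLs.
  const wk = (await (await fetch(`${baseUrl}/.well-known/nodeinfo`)).json()) as NodeInfoLinks;
  // Step 2: the linked document names the server software and its version.
  const info = (await (await fetch(wk.links[0].href)).json()) as NodeInfo;
  return `${info.software.name} ${info.software.version}`;
}

detectServerSoftware("https://friendica.example")   // placeholder domain
  .then(software => console.log(software))          // e.g. "friendica <version>"
  .catch(console.error);
```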
Anyone who wants to use Friendica with a dedicated app beyond what Mastodon can also do, even if it's just text formatting, has only two options. Either Fedilab, which only exists for Android and is also missing some Friendica features. Or trying to register as a tester for the still very unfinished Relatica. On iOS, if you don't want to be limited to Mastodon's functionality, the only options are the latter or the web interface in the browser.
#Long #LongPost #CWLong #CWLongPost #LangerPost #CWLangerPost #FediMeta #FediverseMeta #CWFediMeta #CWFediverseMeta #iOS #iOSApp #Friendica

What Makes a Will Legal in Canada?