I've made a few pictures in a virtual place modelled after Pioneer Courthouse Square in Portland, Oregon.
But I probably won't share them with the Fediverse because I'm not too keen on spending half a day or even a whole day on an image description again. An image description that nobody will read anyway because it'll be well over 10,000 characters long again. And I'm probably underestimating how long it'll actually end up being.
Also, there's eye contact in them because my avatar is in them, and I've got no way to hide pictures from Mastodon users except by linking to them rather than attaching them to the post.
#OpenSim #OpenSimulator #Metaverse #VirtualWorlds #ImageDescription #ImageDescriptions #EyeContact #Inclusion #Inclusivity #Accessibility #A11y #Long #LongPost #CWLong #CWLongPost

I'd go even further. Forcing AI-generated image descriptions upon all pictures and only allowing edits after posting is still bad. Not only is it unethical, it also allows bad image descriptions to be posted to the Fediverse in the first place.
Image descriptions by AI must be optional. Opt-out maybe, but optional. Some people, myself included, post pictures that they themselves are vastly better at describing than any AI out there because they've got the necessary niche knowledge, and they have means of looking up details that an AI doesn't have and probably never will. So a mandatory AI-generated image description will always be inferior to a human-written one.
Fortunately, it's unlikely for something like this to be introduced to all instances of a major Fediverse project. It's even more unlikely to be introduced to all instances of all projects. I can name at least two projects that'll never introduce mandatory AI image descriptions.
#Long #LongPost #CWLong #CWLongPost #ImageDescription #ImageDescriptions #AI #OCR

I always provide full, verbatim, word-by-word, character-by-character transcripts of any and all pieces of text in my pictures. Even if they're too small to be readable or even visible as text. Even if they're partly or mostly obscured. Even if they're only partly within the borders of the picture. Actually especially then.
If it isn't in English, I always try to provide an English translation in addition, one that's as close to being literal as possible.
I never rely on an AI when describing images or transcribing text. I always do it all 100% by hand.
And I'm not posting screenshots, newspaper scans or photographs of posters. I'm posting digital renderings from within 3-D virtual worlds that can have dozens of pieces of text in them.
If I can do that, then checking and correcting an OCR-generated transcript of a newspaper scan shouldn't be asking too much.
#Long #LongPost #CWLong #CWLongPost #AltText #ImageDescription #ImageDescriptions #Transcript #Transcripts #Transcription

Okay, I have a theory

Generated object types (Article vs Note):


The blogging projects WriteFreely and Plume publish everything as Article objects.
Friendica publishes posts as Article objects if they have a title and as Note objects if they don't.
Hubzilla publishes everything as Note objects. This might have been decided in case Hubzilla's Articles app, basically its own built-in long-form blogging engine, should ever start federating its contents (it currently doesn't).
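To illustrate the difference, here is a rough, simplified sketch of the two object types as ActivityStreams-style JSON, built as Python dicts; real objects carry many more properties (addressing, attachments, tags), and exact field usage varies from project to project:

```python
import json

# Simplified, illustrative ActivityStreams objects only.
# A blog-style post: carries a "name" (the title) and is typed as an Article.
article = {
    "@context": "https://www.w3.org/ns/activitystreams",
    "type": "Article",
    "name": "A post with a title",
    "content": "<p>An arbitrarily long, formatted HTML body…</p>",
}

# A microblog-style post: no title, typed as a Note.
note = {
    "@context": "https://www.w3.org/ns/activitystreams",
    "type": "Note",
    "content": "<p>A short status update.</p>",
}

print(json.dumps(article, indent=2))
print(json.dumps(note, indent=2))
```

Seen this way, Friendica's rule above boils down to choosing the type based on whether a title is present.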

Acceptance of Article objects:


Mastodon handles Article objects by creating a link to them with the title written above it. Mastodon was made to behave this way because it simply isn't compatible with blog posts: it couldn't handle text formatting before v4.0, it still can't handle picture embedding, nor can it handle more than four pictures. Besides, it would irritate Mastodon users, who aren't used to seeing anything over 500 characters in their timelines, to find a full-sized blog post several dozen times that length.
Friendica and Hubzilla can handle Article objects properly. They know all the text-formatting tricks in the book, they know image embedding, they know what a title is, all their available user interfaces fold long posts, and their users are used to posts of just about any length anyway.
#Long #LongPost #CWLong #CWLongPost #ActivityPub #Mastodon #Friendica #Hubzilla #WriteFreely
C) The attitude of some of the longtime users (yes, sorry). It does not help at all to tell people who are not happy with some of the aspects of the Fediverse to set up their own servers if they are not technically minded.

We got countless new Fedizens over the last 13 months who have only ever had iPhones as digital end-user devices and never laid their hands upon an actual computer in their lives. Now they find themselves in a space that was almost exclusively inhabited by Linux geeks until thousands of otaku and furries were chased out of Twitter.
It also does not help to expect that all of the newcomers will play by their book.

Yes, it still seems to happen that someone who has been on Friendica or Hubzilla since the mid-2010s or earlier has their first encounter with a Mastodon user. And just as the Mastodon user is irritated that there's something in the Fediverse that isn't Mastodon, that long-time user is irritated at how Mastodon can possibly be so different from Mike Macgirvin's creations, both technologically and culturally.
Sometimes both sides want the respective other side to adapt, justified by "We were here first" vs "We're the majority". It's so very ironic how this pops up around Thanksgiving.
#Long #LongPost #CWLong #CWLongPost #FediverseMeta #FediMeta #CWFediverseMeta #CWFediMeta

I'm tempted to add the following preamble to all my posts and comments. This would answer a lot of questions right away and maybe leave nothing unclear.

Preamble


IMPORTANT: I am not on Mastodon. I am on Hubzilla. Hubzilla is not a Mastodon instance. Hubzilla is a wholly different project in the Fediverse. The Fediverse has never been only Mastodon. And Hubzilla is not a fork of Mastodon either.
Mastodon was not even here first. The first was StatusNet from 2008, which later became GNU social. Then came Friendica in 2010. And Hubzilla is from 2015, based on a project from 2012. Mastodon was launched in 2016. It was immediately federated with all three because it spoke a language that these three spoke, too. Except for GNU social, all are still federated with one another to this day. And this is normal, legal and fully intentional.
We aren't the intruders. Technically speaking, Mastodon is the intruder.
Hubzilla has features that Mastodon doesn't have. Hubzilla has an unlimited character count where Mastodon has a limit of 500 characters. Hubzilla can create various kinds of text formatting such as bold type, italics, code blocks, lists or headlines; Mastodon can display them now, but it can't create them. Hubzilla can embed hyperlinks; Mastodon toots only support bare URLs in plain sight. Hubzilla can do both quotes and "quote-tweets", and it doesn't have to resort to screenshots for either. And Hubzilla always mentions users by their full name, not by their short name as Mastodon does.
When Mastodon was launched with its self-imposed restrictions, Hubzilla had had all these features for almost four years already. Friendica had had the same features for almost six years already. And many of these features, including longer posts and text formatting, have been available on all other Fediverse projects since they were created.
Thus, I will use features that are not available on Mastodon even though Mastodon users could read this. I will not refrain from writing over 500 characters in one post, and I will not refrain from using text formatting just because Mastodon users aren't used to that.
We were here first. And we won't limit ourselves just because you're limited by the software you're using. We will continue to make use of features that you don't have. Deal with it.

Unfortunately, the target audience of this, Mastodon users, won't read it because it's too long. Not to mention that it'd require a boatload of content warnings and filter-triggering hashtags; see below.
#Fediverse #Mastodon #Friendica #Hubzilla #MastodonIsNotTheFediverse #NotOnlyMastodon #CharacterLimit #CharacterLimits #500Characters #TextFormatting #Quotes #QuoteTweets #QuoteToots #Long #LongPost #LongPosts #CWLong #CWLongPost #Meta #CWMeta #FediMeta #FediverseMeta #Fedisplaining #CWFedisplaining

This is kind of ironic. What Mastodon uses for content warnings is actually the summary field. At least Friendica (6 years older than Mastodon), Hubzilla (4 years older than Mastodon) and (streams) actually use that field for summaries. All three have no character limits whatsoever, so it's even justified to have such a field.
Using content warnings on Mastodon to hide graphic images is rather useless in itself, or at least has been for the longest time, because the summary/content warning doesn't actually hide images on Mastodon. It does so on other projects, by the way, including the three I've just mentioned.
I've only seen an actual "read more" implemented on WriteFreely, which is a Medium "clone" in the Fediverse. And even that requires a bit of HTML to be put into the text. Nothing your average Mastodon user would do.
And in my opinion, content warnings should happen on the reader's side by using configurable filters. Unfortunately, while Mastodon has had the technical means for that since v4.0, it fails for a whole list of reasons.
#ContentWarning #ContentWarnings #Mastodon #Long #LongPost #CWLong #CWLongPost

As far as I know, the main reason for this is that Mastodon is the only Fediverse project that does not use the standard ActivityPub way of flagging images sensitive. It has its own, "proprietary", non-standard "hack" for this purpose. And it completely ignores the standard flag which is used by many other projects, Pixelfed included.
The only way for non-Mastodon Fediverse projects to render images invisible on Mastodon is to figure out how Mastodon's own hack works and then implement a way to generate that flag and federate it to Mastodon. Only (streams) does so currently AFAIK, and even (streams) does that with two particular hashtags rather than translating its own standard-compliant sensitive flag for images.
This is one of the many ways of Mastodon trying to force its way upon the whole Fediverse.
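Purely as an illustration of what's at stake at the protocol level, here is a hypothetical sketch of two places such a "hide this media" flag could live in an ActivityStreams-style object. I'm not claiming which placement any particular project actually emits or honours, only that software which looks in one place will miss a flag set in the other:

```python
import json

# Hypothetical, simplified objects for illustration only.
# Variant 1: the flag sits on the post object as a whole.
flag_on_post = {
    "type": "Note",
    "summary": "CW: eye contact",  # the summary field doubling as a content warning
    "sensitive": True,             # post-level "hide the media" flag
    "attachment": [{"type": "Image", "url": "https://example.com/picture.jpg"}],
}

# Variant 2: the flag sits on the individual image attachment.
flag_on_attachment = {
    "type": "Note",
    "summary": "CW: eye contact",
    "attachment": [
        {"type": "Image", "url": "https://example.com/picture.jpg", "sensitive": True}
    ],
}

for post in (flag_on_post, flag_on_attachment):
    print(json.dumps(post, indent=2))
```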
Also, there's the link between content warnings and obscured images. Or there isn't. At least vanilla Mastodon used to not have one: you had to flag your images sensitive manually because content warnings didn't do so automatically. There has been talk about introducing such a feature, but I don't know whether it has been implemented in the meantime.
But even if it has, this feature only works on vanilla Mastodon with toots from vanilla Mastodon. It does not work on anything that has to be imported, be it from Glitch or Akkoma or Firefish or Pixelfed or Hubzilla or whatever.
Let's put it this way: Here on Hubzilla, I can write a post with a picture in such a way that other Hubzilla or (streams) users have to click three times until they can see the picture: the summary (which is the content warning on Mastodon), then a spoiler tag, then the blurred-out image itself. Four times if they have the NSFW app active, which is a simple, automatic, keyword-based content warning system.
The same picture in the same post will be plainly visible for absolutely everyone on Mastodon right away. And I can't do anything about that.
This, in conjunction with the chance of autistic users stumbling upon my posts in their federated timelines, is why I don't post pictures with at least halfway visible faces anymore.
#ContentWarning #ContentWarnings #Long #LongPost #CWLong #CWLongPost

The primary topic of my channel is a very visual medium: 3-D virtual worlds. It shouldn't be interesting to blind or visually-impaired people at all. And as far as I know, I don't have a single connection (= follower) who is not sighted and still interested in that kind of stuff.
Nonetheless, I put a huge effort into writing image descriptions. Such a huge effort actually that I don't post pictures more than once or maybe twice a month, if at all.
I do that for people who may happen upon my posts on some federated timeline out there. Blind or visually-impaired users who, for some reason, may actually be curious about virtual worlds. Sighted users who are curious, too, but who understandably need a whole lot of description and explanation to get my pictures. Blind or visually-impaired users who may not be interested in virtual worlds per se, but who still want to know what my pictures look like and what the stuff in my pictures looks like. And lastly, the self-appointed Fediverse police who, I guess, even complain about pictures with no or insufficient description when they find them on their federated timelines.
#AltText #ImageDescription #ImageDescriptions #Long #LongPost #CWLong #CWLongPost

Such a high limit would only be of use for other Pixelfed users. If the Pixelfed GUI is capable of displaying alt-text in such a way that it can be scrolled, that is.
At least Mastodon, Misskey and all Misskey forks including Firefish have a hard cap at 1,500 characters. Longer alt-text is not folded in, it's cut off. Characters beyond that limit are discarded altogether. So if you write an alt-text that's over 1,500 characters, users of these projects will never be able to read all of it.
Friendica, Hubzilla and (streams) don't have any character limits at all. Not for posts, not for alt-text either because alt-text is grafted into the image embedding markup code, so it's part of the post, technically speaking. However, even on a standard desktop display with no zoom and no enlarged characters, you'd be hard-pressed to display more than ca. 3,800 characters because the alt-text pop-ups can't be scrolled.
Last but not least, overly long alt-text is not good for the actual primary target audience of alt-texts, screen reader users. Unlike a post text, alt-text can't be navigated. You can't rewind to a specific point in an alt-text using a screen reader. If you have to re-read something, you have to go back to the very beginning of the alt-text and have your screen reader rattle the whole alt-text down again.
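As a small, hypothetical illustration of the practical consequence (the 1,500 and roughly 3,800 character figures are the limits described above; nothing here is an official API of any project):

```python
# Hypothetical pre-posting check based on the limits described above.
HARD_CAP = 1500    # Mastodon, Misskey and its forks: alt-text beyond this is discarded
POPUP_MAX = 3800   # rough practical ceiling for unscrollable alt-text pop-ups

def alt_text_warnings(alt_text: str) -> list[str]:
    """Return human-readable warnings if the alt-text will be cut off or unreadable."""
    warnings = []
    if len(alt_text) > HARD_CAP:
        warnings.append(
            f"{len(alt_text) - HARD_CAP} characters will be silently cut off "
            "for readers on Mastodon, Misskey and Misskey forks."
        )
    if len(alt_text) > POPUP_MAX:
        warnings.append(
            "Even where no cap applies, an unscrollable pop-up can't display this much text."
        )
    return warnings

print(alt_text_warnings("x" * 2000))
```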
If anything, Pixelfed should raise the post limit. And the Fediverse as a whole should free itself from the Mastodon-induced "image description = alt-text = image description, full stop, no exception" mindset. After all, information that is available nowhere else does not belong in the alt-text.
#AltText #ImageDescription #ImageDescriptions #Pixelfed #CharacterLimit #CharacterLimits #Long #LongPost #CWLong #CWLongPost

If GPT were actually good, it would not have to resort to "seems to be", "could be", "might be", "suggests" etc. Instead, it could a) identify with 100% certainty which game or virtual world this image is from, b) judge whether those who are likely to come across the image are familiar with that game or virtual world, c) describe and explain it if not, and d) even identify where exactly in that game or virtual world the image was made. Also, it could identify every last object in the image with 100% certainty and know whether or not it's necessary to explain the object to the audience.
This is one of the reasons why I'll never resort to an AI for image descriptions and always write them myself. I take all my pictures in 3-D virtual worlds, very obscure ones even. I know what it's like to describe such pictures. And I know that I'll always have the advantage of being able to look at things in-world rather than only in the image, something that no AI will be capable of anytime soon.
#AltText #ImageDescription #ImageDescriptions #AI #GPT #ChatGPT #VirtualWorlds #Metaverse #Long #LongPost #CWLong #CWLongPost

This is exactly why I've only ever written an alt-text for someone else's picture once.
I prefer writing very detailed, descriptive, informative, explanatory image descriptions. I can only do that if I actually know inside-out what's in the picture, and if I can research what I don't know off the top of my head.
I think I'm best at describing my own pictures from within 3-D virtual worlds, which really require very detailed descriptions. And even then, only when I'm in that very virtual place while I'm describing the picture. That way, I can walk and cam and zoom around all I want. This lets me see even tiny details better than just looking at the picture. For example, if a sign in-world shows up in the picture as a blot of 4x3 pixels, I can and actually do transcribe the three lines of text on it verbatim and sometimes even identify the typeface.
If I have to describe a real-life photograph taken by someone else, I can't travel to where the picture was taken, walk around that place with a running laptop and spend hours researching, describing and explaining.
That one exception showed something I partly knew rather well and partly could research, and besides, the image was heavily edited anyway.
#AltText #ImageDescription #ImageDescriptions #Alt4U #Alt4You #AltText4U #AltText4You #Long #LongPost #CWLong #CWLongPost

Nova Scotia revises long-term care expansion adding another 2,200 rooms by 2032
The Nova Scotia government is adding or replacing 2,200 rooms to the province's long-term care network by 2032 to meet the demands of an aging population.

There are three sorts of alt-text/image description guides on the Internet. None of them is really that useful for the Fediverse outside of Mastodon.
The first sort is the most common one. It explains how to graft alt-text into HTML code, and either how that's useful for SEO or why you shouldn't use alt-text for SEO.
However, this is not applicable to social media at all. Except maybe for (streams), which is the only Fediverse project that allows HTML in posts, but probably not even there. And it assumes an unlimited amount of characters around the posted image. Bonus points if the guide is so old and outdated that it still recommends longdesc for anything over 150 characters.
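For those who have never seen such a guide, this is roughly what "grafting alt-text into HTML" means (shown here as a plain string; longdesc is obsolete in HTML5, which says something about the age of guides that still recommend it):

```python
# What the first sort of guide teaches: the alt attribute written straight into
# the page's HTML. longdesc pointed to a separate page with a longer description
# and is obsolete in HTML5.
img_tag = (
    '<img src="plaza.jpg" '
    'alt="Brick-paved plaza with a small amphitheatre in the centre" '
    'longdesc="plaza-description.html">'
)
print(img_tag)
```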
The second sort is about how to add alt-text to pictures on social media.
However, "social media" refers exclusively to the traditional commercial walled-garden platforms owned by Silicon Valley corporations: Twitter/X, Facebook, Instagram, LinkedIn.
The third sort is very rare already. It might explicitly be about Mastodon. But it might just as well say it's about "Mastodon and the Fediverse".
However, even "Mastodon and the Fediverse" almost always only means Mastodon. And in this case, it actually only means Mastodon. It might mention that "the Fediverse" is so great because you can put the absolutely incredible amount of 1,500 characters into alt-text, but only 500 into a post. And it provides step-by-step instructions on how to add an alt-text in Mastodon's default Web interface.
Not only does it lack any instructions for alternate Web interfaces and any mobile app, but it also completely ignores everything that isn't vanilla Mastodon. I'm not only talking about step-by-step instructions for other Fediverse projects.
No, a good guide with "Fediverse" in the title would mention that absolutely everything that isn't vanilla Mastodon offers more characters in the post than Mastodon offers in the alt-text. And while it might be tempting to shove a detailed, over-1000-characters image description into the alt-text on Mastodon because you don't have any other choice, doing that makes absolutely no sense literally anywhere else in the Fediverse.
A good guide with "Fediverse" in the title would do as the first sort of alt-text guide does: explain that information that isn't available anywhere else in the post, not in the post text and not in the image itself, does not belong in the alt-text. Not only is long alt-text hard to navigate with a screen reader, but there are people who can't access alt-text at all, even if only due to a physical disability. Anything written only in alt-text is lost to them.
But if you're anywhere that isn't Mastodon, there is no reason for you to put such stuff into the alt-text. At least not Mastodon's reason, which is the utter lack of character capacity in toots. Nothing else has that limitation in the first place.
Misskey offers you 3,000 characters in posts. That's twice as much as in Mastodon's or Misskey's own alt-text, and that's a whopping six times as much as in Mastodon's toots. Firefish offers you from 3,000 characters upward. Pleroma and Akkoma offer you from 5,000 characters upward. Friendica, Hubzilla and (streams) do not even have any character limits by default.
A good guide would know that and tell you that.
But such guides don't exist.
#AltText #ImageDescription #ImageDescriptions #Fediverse #Mastodon #Pleroma #Akkoma #Misskey #Firefish #Friendica #Hubzilla #Streams #CharacterLimit #CharacterLimits #Long #LongPost #CWLong #CWLongPost

The treatment for sickle cell disease and beta-thalassemia is the first to be licensed using the gene-editing tool Crispr, which earned its discoverers the Nobel Prize in 2020

The authorities have now decided to adopt a multi-pronged approach to rescue the trapped workers, which includes vertical drilling, micro-tunnelling and drilling a borehole.

Bluesky is only so attractive because it promises to be "literally Twitter without Musk".
But the very moment it becomes even only semi-decentralised, the very moment at least one more instance is launched, it stops being that "literally Twitter without Musk" dreamland and becomes even more complicated than Mastodon. That is, unless the official Bluesky iOS and Android app starts railroading newbies hard to one specific instance just like Mastodon's official app does.
And it will go decentralised. It already seems to be hard enough to accommodate the invited users on that one existing instance as it is. Bluesky won't open registrations to just anyone with only this one instance. So when registrations are opened, new users will have to choose an instance.
Then it's only either Threads or back to X.
#Long #LongPost #CWLong #CWLongPost

Because there are still more people on Mastodon, especially on the big general-purpose instances, who think the Fediverse is only Mastodon. Or maybe Mastodon + Pixelfed + PeerTube at most.
Of those who don't, I guess not exactly few think that everything that isn't Mastodon is just Mastodon with a different UI and a different name because they can't for the lives of them imagine that a Fediverse project could possibly have features that Mastodon doesn't.
And then there are the Mastodon fanbois and fangurls who are fully convinced that Mastodon is the best there is in the entire Fediverse. I myself have been told by someone that Mastodon was the only Fediverse project that's "feature-complete". So moving elsewhere than Mastodon is considered moving somewhere worse.
Not to mention that those who love Mastodon over everything else don't want to see a single user leave Mastodon, regardless of whether they can stay in contact with that user or not.
#Fediverse #Mastodon #MastodonIsNotTheFediverse #Long #LongPost #CWLong #CWLongPost

More staff needed at B.C. care homes, seniors advocate reports
A survey of thousands of seniors in long-term care homes in B.C. recommends the provincial government increases staff, improves food and boosts access to community services.

One learns resilience because "there is no other option than to keep moving forward".

It's a pity that so many sim owners apparently can't be bothered to ever test what they have on their sims. And I'm not even talking about fixing what's obviously broken.
Some two years ago, I already mentioned . This can range from footpaths that are unusable because something is blocking them to teleporters that are necessary but non-functional or misconfigured.
In general, anything scripted may be failure-prone. It can be because the scripts need to be reset but haven't been. It can be because the scripts are hopelessly outdated, especially if objects with ancient scripts from the 2000s end up on a sim running YEngine. Or it can be because a script is written in OSSL, but OSSL is deactivated on the sim.
Or, better yet, objects that ought to be scripted, that everyone would expect to be scripted, but that aren't scripted for some reason. Even if only because the sim builder has unknowingly picked up an empty copy.
All of these are issues that could have been discovered and ideally fixed if the sim owner had taken some time and effort to test things. Instead, they just drop stuff on their land, and they're done with it. Or they load an entire OAR, blindly assuming it works perfectly out of the box.
Experience from travelling around the Hypergrid should bring with it the realisation that not exactly few OARs are buggy, ranging from Linda Kellie sales boxes of which no intact copies exist anymore to the bathroom door in her Freebie Mall 2.0 which I've only ever seen fixed once to Clarice Alelaria's Avataria sim in which the majority of sales boxes have mangled permissions and can't be acquired. And I'm not even talking about dropping old OARs that heavily rely on advanced scripts, made in times when there was only XEngine, onto a sim running YEngine.
The common lack of feedback doesn't help. Some visitors would rather downrate an entire sim on OpenSimWorld because scripted stuff doesn't work than let the sim owner know so that they could go and fix the issue. But sadly, some sim owners take each bug report concerning any of their sims as a personal attack, or at least they completely ignore bug reports. No wonder nobody ever reports bugs.
#OpenSim #OpenSimulator #Metaverse #VirtualWorlds #PetPeeve #SimBuilding #Long #LongPost #CWLong #CWLongPost

And that's why it's impossible to always cater to everyone's needs 100%.
If you have a complex, detailed image, you have to choose.
Either you write a very short image description to cater to those with a short attention span, but then you leave unclear to blind or visually-impaired users what all that stuff in your image looks like.
Or you write a full detailed description so that blind and visually-impaired users have an idea what everything looks like, but those with a short attention span can't follow your description anymore.
Or you quit posting complex images that don't focus on only one element altogether.
I know all this from personal experience.
#AltText #ImageDescription #ImageDescriptions #Accessibility #A11y #Inclusion #Inclusivity #Long #LongPost #CWLong #CWLongPost

Woman at work using a napkin to open a cabinet. Man on the subway using a tissue to hold on to the pole. Neighbor waiting at the front door so I can open the doors for him, won't touch doorknobs. None of the people in these situations were wearing masks. It is logical for people to risk their literal lives, but not want to get "cooties"? It's nonsense.

Have you tried this yourself? And with "description", do you mean whatever alt-text the images on KnowYourMeme might have, or the actual meme description and explanation?
If the former: On KnowYourMeme, alt-texts are either missing or not nearly up to the Fediverse's standards. They don't describe the image, nor do they explain the meme.
If the latter: That's usually impossible for most Fediverse users, including everyone on Mastodon. Meme descriptions can grow way beyond 1,500 characters. You can't put them into alt-text on Mastodon, and you can't put them into the post text body on Mastodon.
For example, has a description of almost 3,500 characters. And that doesn't include any markup, the notable examples and the external references.
Besides, it's illegal either way. All original content on KnowYourMeme is copyrighted. You need prior written consent to copy anything from that website. Otherwise, I would have given you a demonstration of what'd happen if someone actually followed this advice. Two, actually: one with the full meme description in the alt-text, one with the full meme description including all links and text formatting intact in the post text body.
My advice would be to mention the name of the meme and link to the respective KnowYourMeme page, both in the post text body and not in the alt-text. The alt-text would include a short description of the image and a full verbatim transcription of the meme text(s).
That is, unless someone replies and demands full detailed descriptions of all meme pictures along with the KnowYourMeme link.
#AltText #ImageDescription #ImageDescriptions #Meme #Memes #Long #LongPost #CWLong #CWLongPost

Okay, since I won't be able to have pictures blurred out on Mastodon, here's a new approach: linking them instead of embedding them.
Anyway, this is what happens when the DJs at your virtual events are too good: They turn you into a music snob with high expectations.
which I hope Mastodon will hide behind the content warning. If I had embedded it like I usually do, Mastodon would put it into plain sight for everyone, including people easily triggered by eye contact, regardless of the content warning.
It is based on the "One Does Not Simply" meme.
This time, what Boromir says is, "One does not simply get away with playing the single versions of Purple Rain and Nights In White Satin to us." If anyone who is not fully sighted wants to know more details about the image, ask. I'm trying to slim down the image description this time, also since the image is not actually part of this post.
It refers to the DJ events I regularly attend in virtual worlds. Our local DJs usually don't play single versions or radio edits. They prefer album versions, no matter how long they are, whether it's "Purple Rain" by Prince & The Revolution (8:41), "Nights In White Satin" (7:22) or even "In-A-Gadda-Da-Vida" by Iron Butterfly (17:03) or "Tarkus" by Emerson, Lake & Palmer (20:41). So when we're elsewhere, and the DJ plays a shortened single or radio version of one of these (except "Tarkus"), we're easily disappointed.
#OpenSim #OpenSimulator #Metaverse #VirtualWorlds #Meme #OneDoesNotSimply #OldMemesNeverDie #Long #LongPost #CW #CWLong #EyeContact #CWEyeContact

There's no rule that text which cannot be read has to be included in alt text.

There's no rule on whether or not to transcribe unreadable text at all. So I prefer to err on the "too much" side.
(That rule is for short text which is readable.)

And yet, I've seen Mastodon users of all people try to transcribe what amounts to an entire newspaper article verbatim. The trick they use is to upload the same picture with text four times and spread the transcription across four alt-texts.
I encourage you to engage with best practices for alt text and read what blind folks and people with low vision prefer from alt text.

I've read a lot of articles about best practices for alt-text. Most of them weren't applicable because they were geared towards static webpages and not social media posts.
Those that were specifically for social media were for Facebook, Twitter and LinkedIn, all places that don't have nearly the possibilities of the Fediverse or its attitude towards accessibility.
Those that were for the Fediverse were only for Mastodon and written for vanilla Mastodon's still rather low character limits. They were rather usable, and I actually wrote typical Mastodon-style alt-text at first.
But then I stumbled upon posts on Mastodon in which sighted people wrote that alt-text/image descriptions can help even people without any disabilities understand images that show things these people aren't familiar with. Allow me to "quote-tweet" an example including a link to the original:

Alt-text doesn't just mean accessibility in terms of low-vision or no-vision end users.
Done right, it also means accessibility for people who might not know much about your image's subject matter either.
This is especially true for technical topic photos. By accurately describing what's in the picture, you give context to non-technical viewers, or newbies, as to exactly what they're looking at, and even describe how it works or why it matters.
Alt-text is not just an alternate description of a visual medium, it's an enhancement for everyone if you do it right.
(I can't find any prior post of mine on this, so if I've actually made this point before, well, you got to hear a version of it again.)

And it dawned upon me that my pictures from virtual worlds consist of things people aren't familiar with. Down to the very locations where I take the pictures. Logically, this meant that I had to describe a whole lot more. Namely everything.
None of those best-practices websites covered my use case even remotely. They assumed either graphs, which they considered painfully difficult to describe, or real-life photos meant to show one or a few specific elements. I had pictures from what's essentially an alien world, and I wanted to show them as a whole with no focus on any particular element. Literally everything in them would be equally important within the context of the post. And at the same time, everything needed to be described because people would be unfamiliar with it.
I studied alt-text, image descriptions and accessibility in the Fediverse. I studied them a lot, and I tried to figure out the best solution.
Seven months ago, I decided to do a dry test run and see what'd come out if I actually did that. I picked which I had already used in with no description at all. Back then, I knew neither that image descriptions are essentially required nor how alt-text is added to images on Hubzilla.
If I were to do this, I decided, I'd have to do it properly. From looking at the picture at the resolution at which it was posted (800x533), I couldn't see enough to properly describe the image. At the original resolution (2100x1400), I couldn't see enough either. So since the original location where the picture was taken didn't exist anymore, I went to where the building was preserved. In-world, I could zoom and pan around as much as I wanted and enlarge everything as much as needed. I could transcribe text at just about any size.
Some three and a half hours later, I had an image description that was just under 11,000 characters. I wasn't really satisfied with what I had written because the large black sign behind the tree had not been preserved, so I couldn't transcribe it. A few days later, I found older pictures from the original location, including two that showed the large black sign at a high enough resolution that I could transcribe it. However, instead of just adding the missing transcription, I went back in-world to study more details that I had left out. Overall, I had invested well over five hours into over 13,200 characters.
Now, what was I supposed to do with so much text?
Back then, I still considered image descriptions in the post horrible style because they ruined the looks of the post, especially long ones. I would have been perfectly able to put the description into the alt-text, but I already knew that Mastodon, Misskey and CalcKey would chop 11,700 characters off, never to be seen again on their instances. Putting the description into a comment or a separate article would both have been possible, but having it separate from the image wouldn't have been accessible.
So I asked how to handle this situation. I was told that not everyone can access alt-text, and that any information that is not available in the actual post text and not apparent from the image must go into the post text, full stop.
I still wasn't convinced. So I did a test run with the image linked above and the 13,200-character image description in the shape of a five-post thread. I even showed it to the Fediverse community on Lemmy. I posted four new variants of the post from last year with an explanation before it and the same image.

I didn't get even a single reply. No likes, no dislikes. This was mostly because posting pictures from Hubzilla to Lemmy was and still is broken, but I wasn't aware of that. And Mastodon and the rest of the Fediverse acted as if I hadn't even posted any of this.
Lastly, it was my own decision to sacrifice style for maximised accessibility, put the image description into the post itself and have the alt-text reference the post text body for description.
For as long as those 13,200 characters remained my personal record, I only received positive feedback for this strategy and the level of detail, and no negative feedback for the length. And even that positive feedback was very scarce, three times maybe, and it mostly came after I had shown specific people my image descriptions.
As of now, I hide posts with images behind a content warning because some people demand content warnings for long posts (= absolutely everything over 500 characters). I always mention how many characters the post has so that those who open the content warning won't be surprised. And I give a very short image description in which I don't actually describe anything in each alt-text for those who don't want to open the content warning just to have a monster post jump into their faces. To be on the safe side, I also add the hashtags #Long, #LongPost, #CWLong and #CWLongPost.
Ever since I've broken my 13,200-character record, you are actually the first user to give me any feedback whatsoever. And even yours isn't on one of my latest image posts specifically.
Yes, some people like more detail than others. But 40,000 words is horribly excessive, and seems like a wasted effort on your part. You're writing essays with additional context, history, and details not present in the image. That's fine to write as a separate thing, but that's not what alt text is, or even what a long description is for.

Actually, everything I describe is always at least partially within the borders of the image. The only time I've described things and transcribed texts that were fully covered up by something else was when I wrote the descriptions for the three space displays because I didn't pay attention to what was visible from the camera's point of view. Otherwise, I avoid describing elements that are fully hidden by something else as well as text on surfaces that face away from the on-looker.
To be honest, so far I prefer being criticised for writing much too much over being criticised for not describing or explaining enough. I don't want to be one of those "lazy" users who can't be bothered to describe an image properly.
The only thing I could do to make my image descriptions significantly shorter without risking describing too little would be to do what I did in the post with the wild pigs or in (a bit over 3,000 characters, 45 minutes, praise from Mastodon in the comments): adjust the camera angle and/or crop the picture in such a way that there is as little to describe as possible.

Cook et al. used experimental manipulations and longitudinal data to show that a long-lived species becomes less with age. They also found that structures depend on structure.
Read now ahead of print!

Night Grooves McDade, Texas, USA

Rolleiflex 2.8 D Kodak Tri-X 400 2023

1978 Australian Horror Film LONG WEEKEND Novelization Stalks the Wild on December 5th
Encyclopocalypse Publications' commitment to preserving cinematic history through novelizations continues with a December 5th release of Long Weekend. Based on Colin Eggleston's 1978 film of the same name, Long Weekend is a cautionary telling of man vs....

As I hope you've already discovered, the Fediverse is nothing like Twitter/X. There is no algorithm automatically forwarding your posts to those who might be interested in them. Also, even though Mastodon has introduced full-text search, it's only slowly picking up.
Instead, the key to post visibility and discoverability is using hashtags.
If you're using OpenSim, and you aren't sure what to tag, here are some recommendations. If I mention a tag explicitly, it most likely already exists.
First of all, of course, #OpenSimulator and #OpenSim. You may use either, but I recommend using both.
Then there's #Metaverse. The OpenSim community has used that word for some 15 years already. Besides, it helps demonstrate three things to those who search for it. One, "The Metaverse" isn't dead. Two, it wasn't invented by Zuckerberg either. Three, it can be done and has been done without a blockchain, a cryptocurrency and NFTs. (If you want to write about Zuck's worlds, use #Horizons or specifically #HorizonWorlds instead.)
I myself also use #VirtualWorlds, and so do some Second Life and OpenSim users.
If you write about a grid or mention it sufficiently, tag it. #OSgrid, #Kitely, #3rdRockGrid, #Groovyverse, #CraftWorld, #Bridgemere, #DorenasWorld, #Dereos, #ArtDestiny, #DiscoveryGrid, #BarefootDreamers, #Neverworld, #Astralia, #AviWorlds, #EtheriaGrid, #TheGridThatShallNotBeNamed (you know which one I mean) etc. This should include dead grids such as #Metropolis.
Sometimes things might get complicated, for example if there's a popular abbreviation or acronym for a grid. In that case, feel free to tag e.g. both #AlternateMetaverse and #AMV or both #GreatCanadianGrid and #GCG. It really only gets complicated with #WolfTerritories, #WolfTerritoriesGrid, #WTGrid and #WolfGrid, all four of which have been used by Lone Wolf himself.
In addition, if you actually write about a grid, you might want to use #Grid. Not so much for fleeting mentions.
Of course, #Hypergrid if you mention that.
Events usually come with their own official suggestions for social media hashtags. This year, for example, we've already had #OSG16B for OSgrid's 16th birthday, #OSFest2023 for OpenSimFest (#OpenSimFest or #OSFest) and #HIE2023 for the Hypergrid International Expo (#HypergridInternationalExpo, #HIE or #HIEOpenSim). The OpenSimulator Community Conference is always shortened to #OSCC, and the year is always two-digit, e.g. #OSCC23. And let's not forget #CornflakesWeek.
Other things worth tagging include OpenSim's own mesh bodies: #RuthAndRoth for the whole project, #Ruth2 and #Roth2. #RuthToo and #RothToo or even #RuthTooRC3 and #RothTooRC1. #LuvMyBod. #Max, #Maxine and #Maxwell.
Some more examples for OpenSim-related tags I've used before: #OAR, #HGSafari, #MadeInOpenSim, #ArcadiaAsylum, #LbsaPlaza, #WrightPlaza, #EventPlaza, #Clutterfly, #OpenSimWorld, #BakesOnMesh (feel free to yoink this one, Second Life users), #FirestormViewer (ditto), #PBR (ditto), #LSL (ditto) and #OSSL, #ZHAO (ditto) and #khAOs, #SFposer, #DreamGrid, #Sim, #SimBuilding (good for talking about how a sim is built)... You could also reference what I call #TheGreatGridDyingOf2022.
More generally, there are tags such as #Avatar/#Avatars, #VirtualArchitecture, #VirtualTravel and #VirtualClothing/#VirtualFashion. And if it's related to education, #VirtualWorldsEducation would be the tag.
#Hashtags #Long #LongPost #CWLong #CWLongPost






Metacognition Is Cool