Posts Tagged programming

Disinflections


I enjoy taking words that have irregular inflections, and inflecting other words the same way — for instance, saying *squoke as the past tense of squeak, analogous with speak and spoke, or even *squought, analogous with seek and sought. Sometimes those disinflections, as I’ve decided to call them, look or sound like other words… for instance, analogous with fly, flew, and flown, I could use crew and crown as past tenses of cry, or boo and bone as past tenses of buy. Indeed, analogous with buy and bought, the past tense of fly could be *flought, but then again, perhaps the present tense of bought could be ‘batch’ or ‘beak’, or ‘bite’, analogous with caught and catch, or sought and seek, or fought and fight.

The Disinflectant app

For a while now, I’ve wanted to make an app to find these automatically, and now that I have a bit of free time, I’ve made a prototype, mostly reusing code I wrote to generate the rhyme database for Rhyme Science. I’m calling the app Disinflectant for now. Here’s what it does:

  1. Read words from a file and group them by lemma.
    Words with the same lemma are usually related, though since this part is using text only, if two distinct lemmas are homographs (words with the same spelling but different meanings) such as bow🎀, bow🏹, bow🚢, and bow🙇🏻‍♀️, then they’re indistinguishable. This part is done using the Natural Language framework (henceforth referred to as ‘the lemmatiser’), so I didn’t write any complicated rules to do this.
  2. Find out the pronunciation of the word, as text representing phonemes.
    This is done using the text-to-speech framework, so again, nothing specific to Disinflectant. The pronunciation is given in phoneme symbols defined by the API, not IPA.
  3. Find all the different ways that words with the same lemma can be transformed into another by switching a prefix or suffix for another. For instance:
    Transform type         Transform     By analogy with
    Spelling suffix        y→own         fly→flown
    Pronunciation suffix   IYk→AOt       seek→sought
    Spelling prefix        e→o           eldest→oldest
    Pronunciation prefix   1AW→w1IY      our→we’re

Most prefixes in English result in words with different lemmas, so Disinflectant didn’t find many prefix transforms, and the ones it found didn’t really correspond to any actual grammatical inflection. I had it prefer suffixes over prefixes, and only add a prefix transform if there is no suffix found, so that bus→buses would result in the spelling suffix transform ∅→es and not the prefix transform bu→buse.

Each transform can apply to multiple pairs of real words. I included a way to label each transform with something like ‘past tense’, so the app could ask, ‘why isn’t crew the past tense of cry?’ but didn’t end up filling in any of them, so it just calls them all inflections.

  4. Apply each transform individually to each word, and see whether the transformed version matches another word with a different lemma.
    It could just make up words such as ‘squoke’, but then there would be hundreds of millions of possibilities and they wouldn’t be very interesting to sift through, so it’s better to look for real words that match.

That’s it. Really just four steps of collecting and comparing data, with all the linguistic heavy lifting done by existing frameworks.
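
For the curious, here’s a rough sketch of what steps 1 to 3 might look like in Swift. This is not Disinflectant’s actual code — the helper functions are just for illustration, and only the suffix case of step 3 is shown.

```swift
import NaturalLanguage
import AppKit

/// Step 1: the lemma, via the Natural Language framework.
func lemma(of word: String) -> String {
    let tagger = NLTagger(tagSchemes: [.lemma])
    tagger.string = word
    let (tag, _) = tagger.tag(at: word.startIndex, unit: .word, scheme: .lemma)
    return tag?.rawValue ?? word   // fall back to the word itself if no lemma is found
}

/// Step 2: the pronunciation, in the speech synthesiser's phoneme symbols (not IPA).
func phonemes(of word: String) -> String {
    NSSpeechSynthesizer().phonemes(from: word)
}

/// Step 3 (suffix case only): strip the longest common prefix of two related
/// words and keep what's left of each as a suffix-for-suffix transform.
func suffixTransform(from a: String, to b: String) -> (String, String) {
    let commonPrefixLength = zip(a, b).prefix(while: { $0.0 == $0.1 }).count
    return (String(a.dropFirst(commonPrefixLength)),
            String(b.dropFirst(commonPrefixLength)))
}

print(lemma(of: "flown"))                         // typically "fly"
print(phonemes(of: "seek"))                       // something like "s1IYk"
print(suffixTransform(from: "fly", to: "flown"))  // ("y", "own")
```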

The limitations

Before I show you some of the results, here are some limitations:

  • So far I’ve only given it a word list, and not a text corpus. This means that any word which has different lemmas or different pronunciations depending on context (such as ‘moped’ in ‘she moped around’, with the lemma ‘mope’, vs. ‘she rode around on her moped’, with the lemma ‘moped’) only gets analysed with one of those lemmas and pronunciations. I have code to work with corpora to add homographs to rhyme.science, but I haven’t tried it in this app yet.
  • It’s only working with prefixes and suffixes. So it might think ‘woke’ should be the past tense of ‘weak’ (by analogy with ‘speak’ and ‘spoke’) but won’t generalise that to, say, ‘slope’ as the past tense of ‘sleep’ unless there is another word ending in a p sound to model it on. I could fairly easily have it look for infix transforms as well, but haven’t done so yet.
  • It doesn’t distinguish between lemmas which are spelled the same, as mentioned above.

The results

For my first full test run, I gave it the SCOWL 40 list, with 60523 words, and (after about a day and a half of processing on my mid-2014 MacBook Pro — it’s not particularly optimised yet) it found 157687 disinflections. The transform that applied to the most pairs of actually-related words was adding a ‘z’ sound to the end of a word, as for a plural or possessive noun or third-person present-tense verb ending in a voiced sound. This applies to 7471 pairs of examples. The SCOWL list I used includes possessives of a lot of words, so that probably inflates the count for this particular transform. It might be interesting to limit it to transforms with many real examples, or perhaps even more interesting to limit it to transforms with only one example.

I just had it log what it found, and when a transform applied to multiple pairs of words, pick a random pair to show for the ‘by analogy with’ part in parentheses. Here are some types of disinflections it found, roughly in order from least interesting to most interesting:

Words that actually are related, just not so much that they have the same lemma:

Some words are clearly derived from each other and maybe should have the same lemma; others just have related meanings and etymology.

  • Why isn’t shoppers (S1AApIXrz) with lemma shopper the inflection of shops (S1AAps) with lemma shop? (by analogy with lighter’s → light’s)
  • Why isn’t constraint (kIXnstr1EYnt) with lemma constraint the inflection of constrain (kIXnstr1EYn) with lemma constrain? (by analogy with shopped → shop)
  • Why isn’t diagnose (d1AYIXgn1OWs) with lemma diagnose the inflection of diagnosis (d1AYIXgn1OWsIXs) with lemma diagnosis? (by analogy with he → his)
  • Why isn’t sieves (s1IHvz) with lemma sieve the inflection of sift (s1IHft) with lemma sift? (by analogy with knives → knifed)
  • Why isn’t snort (sn1AOrt) with lemma snort the inflection of snored (sn1AOrd) with lemma snore? (by analogy with leapt → leaped)

Words that definitely should have had the same lemma, for the same reason the words in the analogy do:

These represent bugs in the lemmatiser.

  • Why isn’t patrolwoman’s (pIXtr1OWlwUHmIXnz) with lemma patrolwoman’s the inflection of patrolwomen (pIXtr1OWlwIHmIXn) with lemma patrolwomen? (by analogy with patrolman’s → patrolmen)
  • Why isn’t blacker (bl1AEkIXr) with lemma black the inflection of blacken (bl1AEkIXn) with lemma blacken? (by analogy with whiter → whiten)

Transforms formed from words which have the same lemma, but probably shouldn’t:

These also probably represent bugs in the lemmatiser.

  • Why isn’t car (k1AAr) with lemma car the inflection of air (1EHr) with lemma air? (by analogy with can’t → ain’t)
    Both ‘can’t’ and ‘ain’t’ are given the lemma ‘not’. I don’t think this is correct, but it’s possible I’m using the API incorrectly or I don’t understand lemmatisation.

Words that are related, but the lemmatiser was considering an unrelated homograph of one of the words, and the actual related word was not picked up because of the first limitation above:

  • Why isn’t skier’s (sk1IYIXrz) with lemma skier the inflection of skied (sk1IYd) with lemma sky? (by analogy with downer’s → downed)
    In this case, the text-to-speech read ‘skied’ as the past tense of ‘ski’, but the lemmatiser read it as the past participle of ‘sky’, as in, ‘blue-skied’, which I think is a slightly obscure choice, and might be considered a bug in the lemmatiser.
  • Why isn’t ground (gr1AWnd) with lemma ground the inflection of grinding (gr1AYndIHN) with lemma grind? (by analogy with rewound → rewinding)
    Here the lemmatiser is presumably reading it as the noun or verb ‘ground’ rather than the past and past participle of ‘grind’.

Pronunciation transforms finding homophones of actual related words:

  • Why isn’t sheikhs (S1EYks) with lemma sheikh the inflection of shaking (S1EYkIHN) with lemma shake? (by analogy with outstrips → outstripping)
    ‘Sheikhs’ sounds just like ‘shakes’, which is indeed the present tense or plural of ‘shake’.
  • Why isn’t soled (s1OWld) with lemma sole the inflection of selling (s1EHlIHN) with lemma sell? (by analogy with sold → selling)
    ‘Soled’ sounds just like ‘sold’, which is indeed the past tense of ‘sell’.

Pronunciation transforms based on an incorrect pronunciation:

These represent bugs in the text-to-speech. Try them yourself on a Mac by setting the system voice to an older American English one such as Victoria, selecting the word, and choosing Speech→Start Speaking from the Edit menu or the contextual menu.

  • Why isn’t nape’s (n1AEpIYz) with lemma nape the inflection of nappy (n1AEpIY) with lemma nappy? (by analogy with suffocation’s → suffocation)
    The text-to-speech pronounces ‘nape’ correctly, but pronounces ‘napes’ like ‘naps’ and ‘nape’s’ like ‘nappies’.
  • Why isn’t mice (m1AYs) with lemma mouse the inflection of me (m1IY) with lemma I? (by analogy with modernity’s → modernity)
    The text-to-speech pronounces ‘modernity’ correctly, but pronounces ‘modernity’s’ like ‘modernitice’.
  • Why isn’t queue’s (ky1UWz) with lemma queue the inflection of cubing (ky1UWbIHN) with lemma cubing? (by analogy with lambs → lambing)
    The text-to-speech pronounces the ‘b’ in ‘lambing’. I’m not sure if there is an accent where this is the correct pronunciation, but it isn’t in the dictionaries I’ve checked.

Small transforms that can be applied to many other words:

Sometimes it will find that a word with the same lemma can have one letter or phoneme changed or added, and then there are a huge number of words that the transform can apply to. I wonder if you could change almost any final letter or phoneme to any other.

  • Why isn’t mine (m1AYn) with lemma I the inflection of mind (m1AYnd) with lemma mind? (by analogy with shoe → shod)
  • Why isn’t ham (h1AEm) with lemma ham the inflection of hay (h1EY) with lemma hay? (by analogy with them → they)
    This one could also be extended to hair (from them → their) to get a full set of weird pronouns.
  • Why isn’t hearth (h1AArT) with lemma hearth the inflection of heart (h1AArt) with lemma heart? (by analogy with sheikh → sheik)
  • Why isn’t captor (k1AEptIXr) with lemma captor the inflection of captain (k1AEptIXn) with lemma same? (by analogy with whiter → whiten)
  • Why isn’t colt (k1OWlt) with lemma colt the inflection of coal (k1OWl) with lemma coal? (by analogy with shopped → shop)

Spelling prefixes and suffixes that don’t quite correspond to how the inflections are formed:

Sometimes changes such as doubling the final consonant are made when an -ing or -ed is added. Since Disinflectant only sees this as a suffix being added, it thinks that specific consonant can also be added to words that end in other consonants.

  • Why isn’t braking (br1EYkIHN) with lemma brake the inflection of bra (br1AA) with lemma bra? (by analogy with picnicking → picnic)
  • Why isn’t garbs (g1AArbz) with lemma garbs the inflection of garbling (g1AArblIHN) with lemma garble? (by analogy with corrals → corralling)
  • Why isn’t badgering (b1AEJIXrIHN) with lemma badger the inflection of badge (b1AEJ) with lemma badge? (by analogy with transferring → transfer)
  • Why isn’t bobsled (b1AAbslEHd) with lemma bobsled the inflection of bobs (b1AAbz) with lemma bob? (by analogy with patrolled → patrol)

Disinflections I might have come up with myself:

  • Why isn’t hay (h1EY) with lemma hay the inflection of highs (h1AYz) with lemma high? (by analogy with lay → lies)
  • Why isn’t bowled (b1OWld) with lemma bowl the inflection of belling (b1EHlIHN) with lemma bell? (by analogy with sold → selling)
  • Why isn’t bodies (b1AAdIYz) with lemma body the inflection of bodice (b1AAdIXs) with lemma bodice? (by analogy with emphases → emphasis)
  • Why isn’t lease (l1IYs) with lemma lease the inflection of loosed (l1UWst) with lemma loose? (by analogy with geese → goosed)
  • Why isn’t wield (w1IYld) with lemma wield the inflection of welt (w1EHlt) with lemma welt? (by analogy with kneeled → knelt)
  • Why isn’t gauze (g1AOz) with lemma gauze the inflection of goo (g1UW) with lemma goo? (by analogy with draws → drew)
  • Why isn’t cheese (C1IYz) with lemma cheese the inflection of chosen (C1OWzIXn) with lemma choose? (by analogy with freeze → frozen)

Transforms based on abbreviations:

  • Why isn’t chuckle (C1UXkIXl) with lemma chuckle the inflection of chuck’s (C1UXks) with lemma chuck? (by analogy with mile → mi’s)
  • Why isn’t cooperative’s (kOW1AApIXrrIXtIHvz) with lemma cooperative the inflection of cooper (k1UWpIXr) with lemma cooper? (by analogy with negative’s → neg)
  • Why isn’t someday (s1UXmdEY) with lemma someday the inflection of some (s1UXm) with lemma some? (by analogy with Friday → Fri)

Other really weird stuff I’d never think of:

  • Why isn’t comedy (k1AAmIXdIY) with lemma comedy the inflection of comedown (k1UXmdAWn) with lemma comedown? (by analogy with fly → flown)
  • Why isn’t aisle (1AYl) with lemma aisle the inflection of meal (m1IYl) with lemma meal? (by analogy with I → me)
  • Why isn’t hand (h1AEnd) with lemma hand the inflection of hens (h1EHnz) with lemma hen? (by analogy with manned → men’s)
  • Why isn’t out (1AWt) with lemma same the inflection of wheat (w1IYt) with lemma same? (by analogy with our → we’re)

If people are interested, once I’ve fixed it up a bit I could either release the app, or import a bigger word list and some corpora, and then publish the whole output as a CSV file. Meanwhile, I’ll probably just tweet or blog about the disinflections I find interesting.


Every iOS developer take-home coding challenge


I can load and parse your JSON.
I can download icons async.
I can show it in a TableView
just to show you that I’m able to.
I’ll go old school if you like it;
I can code it in UIKit.
I can code Objective-C,
if that’s what you expect of me.
You can catch { me } if you try;
I can code it SwiftUI.
I can code it with Combine:
receive(on: .main) and then assign.
I can read it with a Codable,
Local resource or downloadable.
I can code a search bar filter
or reload; I have the skill to!

I can code it every way
to go from model into view
But I have loads to do today
Can we just code things in an interview?

I’ve been looking for a new job lately, and I’ve found that about 80% of the take-home coding challenges I’ve been given amount to ‘Write an iOS app that reads the JSON from this URL or file, and displays it in a list, including the icons from the URLs in the JSON. There should be [some additional controls on the list and/or a detail screen shown when a list item is selected]. You may use [specific language and/or UI framework] but not [some other technology, and/or any external libraries].’

It’s time-consuming, and gets a bit boring after a while, especially when the requirements are just different enough that you can’t reuse much code from the previous challenges, but not different enough that you can learn something new. One company even had me do the whole thing twice, because they’d neglected to mention which UI technology they preferred the first time. Luckily, by then I had existing code for almost every combination, so I didn’t have to waste too much time on it.

This poem is meant to have a ‘Green Eggs and Ham’ vibe, though I couldn’t come up with a good ‘Sam-I-Am’ part. The best I can do is:

I do not like this soul destroyer;
I do not like it, Sawyer-the-Employer!

or:

I do not like this coding prob’,
I do not like it, Bob-the-Job!

I did have a few take-home coding tests that were more interesting. One company had me implement a data structure I was not familiar with, so I got to learn about that. Another asked me to make specific changes (and any others that seemed necessary) in an existing codebase — a task much closer to what I’d likely be doing in an actual job.

Having also been on the hiring end of a JSON-to-TableView experience (it was not my choice of challenge, but I had no objection to it as I didn’t know how common it was at the time), I know how difficult it is to come up with ideas for such challenges, and I’m not sure what the solution is. I most enjoyed talking through problems in an interview, in pseudocode so there’s no pressure to remember the exact syntax without an IDE or documentation to help. This takes a clearly-defined amount of time, gives the interviewer a better idea of how I think, and gives me an idea of what it would be like to work with them. There’s also more immediate feedback, so I don’t waste time working on a detail they don’t care about, or just trying to convince myself that it’s good enough to submit. I realise that some people might find this more stressful than the take-home test, so ideally the companies would give the choice.

I am now at the point of my job search where I don’t think I’ll need to write any more JSON-to-TableView apps 🤞🏻 which is just as well, as I wouldn’t be inspired to do a great job of one.


Accessibility is for Everyone


Accessibility is for everyone. I say that whenever an abled person finds a way that an accessibility feature benefits them. But that’s not all that it means. There are really three different meanings to that phrase:

  • Accessibility exists to make things accessible to everyone.
  • At some point, everyone has some kind of impairment which accessibility can help them with.
  • Changes that make things more accessible can be useful, convenient, or just plain fun, even for people who are 100% unimpaired.

Is this article for everyone?

This is a bare-bones outline of ways accessibility is for everyone, with a few lists of examples from my personal experience, and not much prose. This topic is fractal, though, and like a Koch Snowflake, even its outline could extend to infinite length. I’ve linked to more in-depth references where I knew of them, but tried not to go too far into detail on how to make things accessible. There are much better references for that — let me know of the ones you like in the comments.

I am not everyone

Although I do face mobility challenges in the physical world, as a software developer, I know the most about accessibility as it applies to computers. Within that, I have most experience with text-to-speech, so a lot of the examples relate to that. I welcome comments on aspects I missed. I am not an expert on accessibility, but I’d like to be.

The accessibility challenges that affect me the most are:

  • A lack of fluency in the language of the country I live in
  • Being short (This sounds harmless, but I once burnt my finger slightly because my microwave is mounted above my line of sight.)
  • Cerebral palsy spastic diplegia

That last thing does not actually affect how I use computers very much, but it is the reason I’ve had experience with modern computers from a young age.

Accessibility makes things accessible to everyone

Accessibility is for everyone — it allows everyone to use or take part in something, not just people with a certain range of abilities. This is the real goal of accessibility, and this alone is enough to justify improving accessibility. The later points in this article might help to convince people to allocate resources to accessibility, but always keep this goal in mind.

Ideally, everyone should be able to use a product without asking for special accommodations. If not, there should be a plan to accommodate those who ask, when possible. At the very least, nobody should be made to feel like they’re being too demanding just for asking for the same level of access other people get by default. Accessibility is not a feature — lack of accessibility is a bug.

Don’t make people ask

If some people have to ask questions when others don’t, the product is already less accessible to them — even if you can provide everything they ask for. This applies in a few scenarios:

  • Asking for help to use the product (e.g. help getting into a building, or using an app)
  • Asking for help accessing the accessibility accommodations. For example, asking for the key for an elevator, or needing someone else to configure the accessibility settings in software. Apple does a great job of this by asking about accessibility needs during installation of macOS, with the relevant options turned on.
  • Asking about the accommodations available to find out if something is accessible to them before wasting time, spoons, or money on it. Make this information publicly available, e.g. on the website of your venue or event, or in your app’s description. Here’s a guide on writing good accessibility information.

Asking takes time and effort, and it can be difficult and embarrassing, whether because someone has to ask many times a day, or because they don’t usually need help and don’t like acknowledging when they do. 

In software, ‘making people ask’ is making them set up accessibility in your app when they’ve already configured the accessibility accommodations they need in the operating system. Use the system settings, rather than having your own settings for font size, dark mode, and so on. If the user has to find your extra settings before they can even use your app, there’s a good chance they won’t. Use system components as much as possible, and they’ll respect accessibility options you don’t even know about.
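
To make that concrete, here’s a tiny SwiftUI illustration of the idea (a sketch, not a prescription): standard views and environment values pick up the system’s Dynamic Type, Dark Mode, and Reduce Motion settings, with no app-specific settings screen required.

```swift
import SwiftUI

struct ArticleView: View {
    @Environment(\.accessibilityReduceMotion) private var reduceMotion
    @State private var showDetails = false

    var body: some View {
        VStack {
            Text("Accessibility is for everyone")
                .font(.headline)   // scales with the user's chosen text size
            if showDetails {
                Text("No in-app font size or dark mode switches needed.")
                    .font(.body)   // also respects Dynamic Type
            }
            Button("Details") {
                // Honour the system's Reduce Motion setting instead of always animating.
                withAnimation(reduceMotion ? nil : Animation.default) {
                    showDetails.toggle()
                }
            }
        }
    }
}
```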

If they ask, have an answer

Perhaps you don’t have the resources to provide certain accommodations to everyone automatically, or it doesn’t make sense to. In that case:

  • make it clear what is available.
  • make asking for it as easy as possible (e.g. a checkbox or text field on a booking form, rather than instructions to call somebody)
  • make an effort to provide whatever it is to those who ask for it.

Assume the person really does need what they’re asking for — they know their situation better than you do.

If the answer is ‘no, sorry’, be compassionate about it

If you can’t make something accessible to a given group of people, don’t feel bad; we all have our limitations. But don’t make those people feel bad either — they have their limitations too, and they’re the ones missing out on something because of it. Remember that they’re only asking for the same thing everyone else gets automatically — they didn’t choose to need help just to annoy you.

If you simply didn’t think about their particular situation, talk with them about steps you could take. Don’t assume you know what they can or can’t do, or what will help them.

Everyone can be impaired

Accessibility is for everyone. But just as it is unfortunately still necessary to remind some people that black lives matter even though all lives do, to achieve accessibility for everyone we need to focus on the people who don’t get it by default. So who are they?

Apple’s human interface guidelines for accessibility say this better than I could:

Approximately one in seven people worldwide have a disability or impairment that affects the way they interact with the world and their devices. People can experience impairments at any age, for any duration, and at varying levels of severity. Situational impairments — temporary conditions such as driving a car, hiking on a bright day, or studying in a quiet library — can affect the way almost everyone interacts with their devices at various times.

Almost everyone.

This section will mostly focus on accessibility of devices such as computers, tablets, and phones. It’s what I know best, and malfunctioning hardware can be another source of impairment. Even if you don’t consider yourself disabled, if you haven’t looked through the accessibility settings of your devices yet, do so — you’re sure to find something that will be useful to you in some situations. I’ll list some ways accessibility can help with hardware issues and other situational impairments below.

Apple defines four main kinds of impairment:

Vision

There’s a big gap between someone with 20/20 full-colour vision in a well-lit room looking at an appropriately-sized, undamaged screen, and someone with no vision whatsoever. There’s even a big gap between someone who is legally blind and someone with no vision whatsoever. Whenever we are not at the most abled end of that spectrum, visual accessibility tools can help.

Here are some situations where I’ve used Vision accessibility settings to overcome purely situational impairments:

  • When sharing a screen over a videoconference or to a projector, use screen zoom, and large cursor or font sizes. On macOS when using a projector, you can also use Hover Text; however, this does not show up when screen sharing over a videoconference. This makes things visible to the audience regardless of the size of their videoconference window or how far they are from the projector screen.
  • When an internet connection is slow, or you don’t want to load potential tracking images in emails, image descriptions (alt text) let you know what you’re missing.
  • When a monitor doesn’t work until the necessary software is installed and configured, use a screenreader to get through the setup. I’ve done this on a Mac, after looking up how to use VoiceOver on another device.

Hearing

There’s a big gap between someone with perfect hearing and auditory processing using good speakers at a reasonable volume in an otherwise-quiet room, and someone who hears nothing at all. There’s even a big gap between someone who is Deaf and someone who hears nothing at all. Whenever we are not at the most abled end of that spectrum, hearing accessibility tools can help.

Here are some situations where I’ve used Hearing accessibility settings when the environment or hardware was the only barrier:

  • When one speaker is faulty, change the panning settings to only play in the working speaker, and turn on ‘Play stereo audio as mono’.
  • When a room is noisy or you don’t want to disturb others with sound, use closed captions.

Physical and Motor

There’s a big gap between someone with a full range of controlled, pain-free movement using a perfectly-functioning device, in an environment tailored to their body size, and someone who can only voluntarily twitch a single cheek muscle (sorry, but we can’t all be Stephen Hawking.) Whenever we are not at the most abled end of that spectrum, motor accessibility tools can help.

Here are some situations where you can use Physical and Motor accessibility to overcome purely situational impairments:

  • When a physical button on an iPhone doesn’t work reliably, use Back Tap, Custom Gestures, or the AssistiveTouch button to take over its function.
  • When you’re carrying something bulky, use an elevator. I’ve shared elevators with people who have strollers, small dogs, bicycles, suitcases, large purchases, and disabilities. I’ve also been yelled at by someone who didn’t think I should use an elevator, because unlike him, I had no suitcase. Don’t be that person.

Literacy and Learning

This one is also called Cognitive. There’s a big gap between an alert, literate, neurotypical adult of average intelligence with knowledge of the relevant environment and language, and… perhaps you’ve thought of a disliked public figure you’d claim is on the other end of this spectrum. There’s even a big gap between that person and the other end of this spectrum, and people in that gap don’t deserve to be compared to whomever you dislike. Whenever we are not at the most abled end of that spectrum, cognitive accessibility considerations can help.

Here are some situations where I’ve used accessibility when the environment was the only barrier to literacy:

  • When watching or listening to content in a language you know but are not fluent in, use closed captions or transcripts to help you work out what the words are, and find out the spelling to look them up.
  • When reading in a language you know but are not fluent in, use text-to-speech in that language to find out how the words are pronounced.
  • When consuming content in a language you don’t know, use subtitles or translations.

Accessibility features benefit abled people

Sometimes it’s hard to say what was created for the sake of accessibility and what wasn’t. Sometimes products for the general public bring in the funding needed to improve assistive technologies. Here are some widely-used things which have an accessibility aspect:

  • The Segway was based on self-balancing technology originally developed for wheelchairs. Segways and the like are still used by some people as mobility devices, even if they are not always recognised as such.
  • Voice assistants such as Siri rely on speech recognition and speech synthesis technology that has applications in all four domains of accessibility mentioned above.
  • Light or Dark mode may be a style choice for one person and an essential visual accessibility tool for another.

Other technology is more strongly associated with accessibility. Even when your body, your devices, or your environment don’t present any relevant impairment, there are still ways that these things can be useful, convenient, or just plain fun.

Useful

Some accessibility accommodations let abled people do things they couldn’t do otherwise.

  • Transcripts, closed captions, and image descriptions are easily searchable.
  • I’ve used text-to-speech APIs to generate the initial rhyme database for my rhyming dictionary, rhyme.science
  • I’ve used text-to-speech to find out how words are pronounced in different languages and accents.
  • Menstruators can use handbasins in accessible restroom stalls to rinse out menstrual cups in privacy. (This is not an argument for using accessible stalls when you don’t need them — it’s an argument for more handbasins installed in stalls!)

Convenient

Some accessibility tech lets abled people do things they would be able to do without it, but in a more convenient way.

  • People who don’t like switching between keyboard and mouse can enable full keyboard access on macOS to tab through all controls. They can also use keyboard shortcuts.
  • People who don’t want to watch an entire video to find out a piece of information can quickly skim a transcript.
  • I’ve used speak announcements on my Mac for decades. If my Mac announces something while I’m on the other side of the room, I know whether I need to get up and do something about it.
  • Meeting attendees could edit automatic transcripts from videoconferencing software (e.g. Live Transcription in Zoom) to make meeting minutes.
  • I’ve used text-to-speech on macOS and iOS to speak the names of emojis when I wasn’t sure what they were.
  • Pre-chopped produce and other prepared foods save time even for people who have the dexterity and executive function to prepare them themselves.

Fun

Some accessibility tech lets us do things that are not exactly useful, but a lot of fun.

  • Hosts of the Lingthusiasm podcast, Lauren Gawne and Gretchen McCulloch, along with Janelle Shane, fed transcripts of their podcasts into an artificial intelligence to generate a quirky script for a new episode, and then recorded that script.
  • I’ve used text-to-speech to sing songs I wrote that I was too shy to sing myself.
  • I’ve used text-to-speech APIs to detect haiku in any text.
  • Automated captions of video conferencing software and videos make amusing mistakes that can make any virtual party more fun. Once you finish laughing, make sure anyone who needed the captions knows what was really said. 
  • I may have used the ‘say’ command on a server through an ssh connection to surprise and confuse co-workers in another room. 😏
  • I find stairs much more accessible if they have a handrail. You might find it much more fun to slide down the balustrade. 😁

Advocating accessibility is for everyone

I hope you’ve learnt something about how or why to improve accessibility, or found out ways accessibility can improve your own life. I’d like to learn something too, so put your own ideas or resources in the comments!


NiceWriter: Artificially sweeten your text


Hello, pure world! 🥰

I’m a reputable app for distinguished iOS that puts positive adjectives before innocent nouns. My magical twin, NastyWriter, likes to add venerable insults to badass text, but I’d rather spread some peachy love. We’re not amusing enemies; rather, we’re complementary apps… it’s just that I’m also complimentary. 

Check me out on the tender App Store! I’m complimentary, supported by elegant ads, which you can remove with an in-app purchase. I hope I can make your finest day even better, and your mighty love notes sweeter. 

Lots of joyous love! 😊

complimentary adjectives by NiceWriter
NiceWriter introducing itself on Twitter

A few years ago I noticed a linguistic habit of Twitter user Donald Trump, and decided to emulate it by writing an app that automatically adds insults before nouns — NastyWriter. But he’s not on Twitter any more, and Valentine’s Day is coming up, so it’s time to make things nicer instead.

My new iOS app, NiceWriter, automatically adds positive adjectives, highlighted in pink, before the nouns in any text entered. Most features are the same as in NastyWriter:

  • You can use the contextual menu or the toolbar to change or remove any adjectives that don’t fit the context.
  • You can share the sweetened text as an image similar to the one in this post.
  • You can set up the ‘Give Me a Compliment’ Siri Shortcut to ask for a random compliment at any time, or create a shortcut to add compliments to text you’ve entered previously. You can even use the Niceify shortcut in the Shortcuts app to add compliments to text that comes from another Siri action.
  • If you copy and paste text between NiceWriter and NastyWriter, the app you paste into will replace the automatically-generated adjectives with its own, and remember which nouns you removed the adjectives from.

The app is free to download, and will show ads unless you buy an in-app purchase to remove them. I’ve made NiceWriter available to run on M1 Macs as well, though I don’t have one to test it on, so I can’t guarantee it will work well.

I’ll post occasional Niceified text on the NastyWriter Tumblr, and the @NiceWriterApp Twitter.

NastyWriter 2.1

In the process of creating NiceWriter, I made a few improvements to NastyWriter — notably adding input and output parameters to its Siri Shortcut so you can set up a workflow to nastify the results of other Siri Shortcuts, and then pass them on to other actions. I also added four new insults, and fixed a few bugs. All of these changes are in NastyWriter 2.1.

That’s all you really need to know, but for more details on how I chose the adjectives for NiceWriter and what I plan to do next, read on.


Top 35 Adjectives Twitter user @realdonaldtrump uses before nouns


Edit: As of 8 January, 2021, @realdonaldtrump is no longer a Twitter user, but he was at the time of this post.

Version 2.0.1 of my iOS app NastyWriter has 184 different insults (plus two extra special secret non-insults that appear rarely for people who’ve paid to remove ads 🤫) which it can automatically add before nouns in the text you enter. “But Angela,” I hear you not asking, “you’re so incredibly nice! How could you possibly come up with 184 distinct insults?” and I have to admit, while I’ve been known to rap on occasion, I have not in fact been studying the Art of the Diss — I have a secret source. (This is a bonus joke for people with non-rhotic accents.)

My secret source is the Trump Twitter Archive. Since NastyWriter is all about adding gratuitous insults immediately before nouns, which Twitter user @realdonaldtrump is such a dab hand at, I got almost all of the insults from there. But I couldn’t stand to read it all myself, so I wrote a Mac app to go through all of the tweets and find every word that seemed to be an adjective immediately before a noun. I used NSLinguisticTagger, because the new Natural Language framework did not exist when I first wrote it.
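
For the curious, here’s roughly the idea in code, using the newer NLTagger rather than the NSLinguisticTagger I actually used, so treat it as a sketch rather than the app’s source:

```swift
import NaturalLanguage

/// Walk the words of a piece of text and record every word tagged as an
/// adjective that immediately precedes a word tagged as a noun.
func adjectivesBeforeNouns(in text: String) -> [String] {
    let tagger = NLTagger(tagSchemes: [.lexicalClass])
    tagger.string = text

    var results: [String] = []
    var previousWord: String?
    var previousTag: NLTag?

    tagger.enumerateTags(in: text.startIndex..<text.endIndex,
                         unit: .word,
                         scheme: .lexicalClass,
                         options: [.omitPunctuation, .omitWhitespace]) { tag, range in
        let word = String(text[range])
        if tag == .noun, previousTag == .adjective, let adjective = previousWord {
            results.append(adjective)
        }
        previousWord = word
        previousTag = tag
        return true   // keep enumerating
    }
    return results
}

print(adjectivesBeforeNouns(in: "Many great people love the fake news media."))
// roughly ["great", "fake"] — tagging accuracy varies
```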

Natural language processing is not 100% accurate, because language is complicated — indeed, the app thought ‘RT’, ‘bit.ly’, and a lot of twitter @usernames (most commonly @ApprenticeNBC) and hashtags were adjectives, and the usernames and hashtags were indeed used as adjectives (usually noun adjuncts) e.g. in ‘@USDOT funding’. One surprising supposed adjective was ‘gsfsgh2kpc’, which was in a shortened URL mentioned 16 times, to a site which Amazon CloudFront blocks access to from my country.

For each purported adjective the app found, I had a look at how it was used before adding it to NastyWriter’s insult collection. Was it really an adjective used before a noun? Was it used as an insult? Was it gratuitous? Were there any other words it was commonly paired with, making a more complex insult such as ‘totally conflicted and discredited’, or ‘frumpy and very dumb’? Was it often in allcaps or otherwise capitalised in a specific way?

But let’s say we don’t care too much about that and just want to know roughly which adjectives he used the most. Can you guess which is the most common adjective found before a noun? I’ll give you a hint: he uses it a lot in other parts of sentences too. Here are the top 35 as of 6 November 2020:

  1. ‘great’ appears 4402 times
  2. ‘big’ appears 1351 times
  3. ‘good’ appears 1105 times
  4. ‘new’ appears 1034 times
  5. ‘many’ appears 980 times
  6. ‘last’ appears 809 times
  7. ‘best’ appears 724 times
  8. ‘other’ appears 719 times
  9. ‘fake’ appears 686 times
  10. ‘American’ appears 592 times
  11. ‘real’ appears 510 times
  12. ‘total’ appears 509 times
  13. ‘bad’ appears 466 times
  14. ‘first’ appears 438 times
  15. ‘next’ appears 407 times
  16. ‘wonderful’ appears 375 times
  17. ‘amazing’ appears 354 times
  18. ‘only’ appears 325 times
  19. ‘political’ appears 310 times
  20. ‘beautiful’ appears 298 times
  21. ‘fantastic’ appears 279 times
  22. ‘tremendous’ appears 270 times
  23. ‘massive’ appears 268 times
  24. ‘illegal’ appears 254 times
  25. ‘incredible’ appears 254 times
  26. ‘nice’ appears 251 times
  27. ‘strong’ appears 250 times
  28. ‘greatest’ appears 248 times
  29. ‘true’ appears 247 times
  30. ‘major’ appears 243 times
  31. ‘same’ appears 236 times
  32. ‘terrible’ appears 231 times
  33. ‘presidential’ appears 221 times
  34. ‘much’ appears 217 times
  35. ‘long’ appears 215 times

So as you can see, he doesn’t only insult. The first negative word, ‘fake’, is only the ninth most common, though more common than its antonyms ‘real’ and ‘true’, if they’re taken separately (‘false’ is in 72nd position, with 102 uses before nouns, while ‘genuine’ has only four uses.) And ‘illegal’ only slightly outdoes ‘nice’.

He also talks about American things a lot, which is not surprising given his location. ‘Russian’ comes in 111th place, with 62 uses, so about a tenth as many as ‘American’. As far as country adjectives go, ‘Iranian’ is next with 40 uses before nouns, then ‘Mexican’ with 39, and ‘Chinese’ with 37. ‘Islamic’ has 33. ‘Jewish’ and ‘White’ each have 27 uses as adjectives before nouns, though the latter is almost always describing a house rather than people. The next unequivocally racial (i.e. referring to a group of people rather than a specific region) adjective is ‘Hispanic’, with 25. I’m not an expert on what’s unequivocally racial, but I can tell you that ‘racial’ itself has nine adjectival uses before nouns, and ‘racist’ has three.

“But Angela,” I hear you not asking, “why are you showing us a list of words and numbers? Didn’t you just make an audiovisual word cloud generator a few months ago?” and the answer is, yes, indeed, I did make a word cloud generator that makes visual and audio word clouds. So here is an audiovisual word cloud of all the adjectives found at least twice before nouns in tweets by @realdonaldtrump in The Trump Twitter Archive, with Twitter usernames filtered out even if they are used as adjectives. More common words are larger and louder. Words are panned left or right so they can be more easily distinguished, so this is best heard in stereo.

There are some nouns in there, but they are only counted when used as attributive nouns to modify other nouns, e.g. ‘NATO countries’, or ‘ObamaCare website’.


Audio Word Clouds


For my comprehensive channel trailer, I created a word cloud of the words used in titles and descriptions of the videos uploaded each month. Word clouds have been around for a while now, so that’s nothing unusual. For the soundtrack, I wanted to make audio versions of these word clouds using text-to-speech, with the most common words being spoken louder. This way people with either hearing or vision impairments would have a somewhat similar experience of the trailer, and people with no such impairments would have the same surplus of information blasted at them in two ways.

I checked to see if anyone had made audio word clouds before, and found Audio Cloud: Creation and Rendering, which makes me wonder if I should write an academic paper about my audio word clouds. That paper describes an audio word cloud created from audio recordings using speech-to-text, while I wanted to create one from text using text-to-speech. I was mainly interested in any insights into the number of words we could perceive at once at various volumes or voices. In the end, I just tried a few things and used my own perception and that of a few friends to decide what worked. Did it work? You tell me.

Part of the System Voice menu in the Speech section of the Accessibility panel of the macOS Catalina System Preferences

Voices

There’s a huge variety of English voices available on macOS, with accents from Australia, India, Ireland, Scotland, South Africa, the United Kingdom, and the United States, and I’ve installed most of them. I excluded the voices whose speaking speed can’t be changed, such as Good News, and a few novelty voices, such as Bubbles, which aren’t comprehensible enough when there’s a lot of noise from other voices. I ended up with 30 usable voices. I increased the volume of a few which were harder to understand when quiet.
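
If you want to see which voices are installed on your own Mac, here’s a rough sketch of filtering them down to English ones. It’s not the exact code I used, and it leaves out the speed-adjustability check and the per-voice volume tweaks.

```swift
import AppKit

// List the installed macOS voices and keep only the English ones.
let englishVoices = NSSpeechSynthesizer.availableVoices.filter { voice in
    let attributes = NSSpeechSynthesizer.attributes(forVoice: voice)
    let locale = attributes[.localeIdentifier] as? String ?? ""
    return locale.hasPrefix("en")
}

for voice in englishVoices {
    let name = NSSpeechSynthesizer.attributes(forVoice: voice)[.name] as? String
    print(name ?? voice.rawValue)
}
```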

I wondered whether it might work best with only one or a few voices or accents in each cloud, analogous to the single font in each visual word cloud. That way people would have a little time to adapt to understand those specific voices rather than struggling with an unfamiliar voice or accent with each word. On the other hand, maybe it would be better to have as many voices as possible in each word cloud so that people could distinguish between words spoken simultaneously by voice, just as we do in real life. In the end I chose the voice for each word randomly, and never got around to trying the fewer-distinct-voices version. Being already familiar with many of these voices, I’m not sure I would have been a good judge of whether that made it easier to get used to them.

Arranging the words

It turns out making an audio word cloud is simpler than making a visual one. There’s only one dimension in an audio word cloud — time. Volume could be thought of as sort of a second dimension, as my code would search through the time span for a free rectangle of the right duration with enough free volume. I later wrote an AppleScript to create ‘visual audio word clouds’ in OmniGraffle showing how the words fit into a time/volume rectangle.  I’ve thus illustrated this post with a visual word cloud of this post, and a few audio word clouds and visual audio word clouds of this post with various settings.

A visual representation of an audio word cloud of an early version of this post, with the same hubbub factor as was used in the video. The horizontal axis represents time, and the vertical axis represents volume. Rectangles in blue with the darker gradient to the right represent words panned to the right, while those in red with the darker gradient to the left represent words panned to the left.

However, words in an audio word cloud can’t be oriented vertically as they can in a visual word cloud, nor can there really be ‘vertical’ space between two words, so it was only necessary to search along one dimension for a suitable space. I limited the word clouds to five seconds, and discarded any words that wouldn’t fit in that time, since it’s a lot easier to display 301032 words somewhat understandably in nine minutes than it is to speak them. I used the most common (and therefore louder) words first, sorted by length, and stopped filling the audio word cloud once I reached a word that would no longer fit. It would sometimes still be possible to fit a shorter, less common word in that cloud, but I didn’t want to include words much less common than the words I had to exclude.

I set a preferred volume for each word based on its frequency (with a given minimum and maximum volume so I wouldn’t end up with a hundred extremely quiet words spoken at once) and decided on a maximum total volume allowed at any given point. I didn’t particularly take into account the logarithmic nature of sound perception. I then found a time in the word cloud where the word would fit at its preferred volume when spoken by the randomly-chosen voice. If it didn’t fit, I would see if there was room to put it at a lower volume. If not, I’d look for places it could fit by increasing the speaking speed (up to a given maximum) and if there was still nowhere, I’d increase the speaking speed and decrease the volume at once. I’d prioritise reducing the volume over increasing the speed, to keep it understandable to people not used to VoiceOver-level speaking speeds. Because of the one-and-a-bit dimensionality of the audio word cloud, it was easy to determine how much to decrease the volume and/or increase the speed to fill any gap exactly. However, I was still left with gaps too short to fit any word at an understandable speed, and slivers of remaining volume smaller than my per-word minimum.
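
In case that’s hard to picture, here’s a simplified sketch of the placement search. The parameter values are made up, and the speed-increasing fallback is left out.

```swift
struct PlacedWord {
    let text: String
    let start: Double      // seconds into the cloud
    let duration: Double   // seconds
    let volume: Double     // 0…1
}

// Made-up parameters, not the values the real app used.
let cloudDuration = 5.0
let maxTotalVolume = 1.0
let minWordVolume = 0.1

/// Total volume of all words already speaking at a given instant.
func volumeUsed(at time: Double, in placed: [PlacedWord]) -> Double {
    placed.filter { $0.start <= time && time < $0.start + $0.duration }
          .reduce(0) { $0 + $1.volume }
}

/// Scan the time span for a start time where the word fits under the
/// total-volume ceiling; quieten it if there isn't room at its preferred
/// volume. (The real code would also try a faster speaking rate.)
func place(_ text: String, duration: Double, preferredVolume: Double,
           among placed: [PlacedWord]) -> PlacedWord? {
    let step = 0.05
    var start = 0.0
    while start + duration <= cloudDuration {
        let headroom = stride(from: start, to: start + duration, by: step)
            .map { maxTotalVolume - volumeUsed(at: $0, in: placed) }
            .min() ?? 0
        if headroom >= minWordVolume {
            return PlacedWord(text: text, start: start, duration: duration,
                              volume: min(preferredVolume, headroom))
        }
        start += step
    }
    return nil   // didn't fit anywhere in the five seconds
}
```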

A visual representation of an audio word cloud of this post, with a hubbub factor that could allow two additional words to be spoken at the same time as the others.

I experimented with different minimum and maximum word volumes, and maximum total volumes, which all affected how many voices might speak at once (the ‘hubbub level’, as I call it). Quite late in the game, I realised I could have some voices in the right ear and some in the left, which makes it easier to distinguish them. In theory, each word could be coming from a random location around the listener, but I kept to left and right — in fact, I generated separate left and right tracks and adjusted the panning in Final Cut Pro. Rather than changing the logic to have two separate channels to search for audio space in, I simply made my app alternate between left and right when creating the final tracks. By doing this, I could increase the total hubbub level while keeping many of the words understandable. However, the longer it went on for, the more taxing it was to listen to, so I decided to keep the hubbub level fairly low.

The algorithm is deterministic, but since voices are chosen randomly, and different voices take different amounts of time to speak the same words even at the same number of words per minute, the audio word clouds created from the same text can differ considerably. Once I’d decided on the hubbub level, I got my app to create a random one for each month, then regenerated any where I thought certain words were too difficult to understand.

Capitalisation

The visual word cloud from December 2019, with both ‘Competition’ and the lowercase ‘competition’ featured prominently

In my visual word clouds, I kept the algorithm case-sensitive, so that a word with the same spelling but different capitalisation would be counted as a separate word, and displayed twice. There are arguments for keeping it like this, and arguments to collapse capitalisations into the same word — but which capitalisation of it? My main reason for keeping the case-sensitivity was so that the word cloud of Joey singing the entries to our MathsJam Competition Competition competition would have the word ‘competition’ in it twice.

Sometimes these really are separate words with different meanings (e.g. US and us, apple and Apple, polish and Polish, together and ToGetHer) and sometimes they’re not. Sometimes these two words with different meanings are pronounced the same way, other times they’re not. But at least in a visual word cloud, the viewer always has a way of understanding why the same word appears twice. For the audio word cloud, I decided to treat different capitalisations as the same word, but as I’ve mentioned, capitalisation does matter in the pronunciation, so I needed to be careful about which capitalisation of each word to send to the text-to-speech engine. Most voices pronounce ‘JoCo’ (short for Jonathan Coulton, pronounced with the same vowels as ‘go-go’) correctly, but would pronounce ‘joco’ or ‘Joco’ as ‘jocko’, with a different vowel in the first syllable. I ended up counting any words with non-initial capitals (e.g. JoCo, US) as separate words, but treating title-case words (with only the initial letter capitalised) as the same as all-lowercase, and pronouncing them in title-case so I wouldn’t risk mispronouncing names.

Further work

A really smart version of this would get the pronunciation of each word in context (the same way my rhyming dictionary rhyme.science finds rhymes for the different pronunciations of homographs, e.g. bow), group them by how they were pronounced, and make a word cloud of words grouped entirely by pronunciation rather than spelling, so ‘polish’ and ‘Polish’ would appear separately but there would be no danger of, say, ‘rain’ and ‘reign’ both appearing in the audio word cloud and sounding like duplicates. However, which words are actually pronounced the same depends on the accent (e.g. whether ‘cot’ and ‘caught’ sound the same) and the text normalisation of the voice — you might have noticed that some of the audio word clouds in the trailer have ‘aye-aye’ while others have ‘two’ for the Roman numeral ‘II’.

Similarly, a really smart visual word cloud would use natural language processing to separate out different meanings of homographs (e.g. bow🎀, bow🏹, bow🚢, and bow🙇🏻‍♀️) and display them in some way that made it obvious which was which, e.g. by using different symbols, fonts, styles, colours for different parts of speech. It could also recognise names and keep multi-word names together, count words with the same lemma as the same, and cluster words by semantic similarity, thus putting ‘Zoe Keating’ near ‘cello’, and ‘Zoe Gray’ near ‘Brian Gray’ and far away from ‘Blue’. Perhaps I’ll work on that next.

A visual word cloud of this blog post about audio word clouds, superimposed on a visual representation of an audio word cloud of this blog post about audio word clouds.

I’ve recently been updated to a new WordPress editor whose ‘preview’ function gives a ‘page not found’ error, so I’m just going to publish this and hope it looks okay. If you’re here early enough to see that it doesn’t, thanks for being so enthusiastic!


How to fit 301032 words into nine minutes


A few months ago I wrote an app to download my YouTube metadata, and I blogged some statistics about it and some haiku I found in my video titles and descriptions. I also created a few word clouds from the titles and descriptions. In that post, I said:

Next perhaps I’ll make word clouds of my YouTube descriptions from various time periods, to show what I was uploading at the time. […] Eventually, some of the content I create from my YouTube metadata will make it into a YouTube video of its own — perhaps finally a real channel trailer. 

Me, two and a third months ago

TL;DR: I made a channel trailer of audiovisual word clouds showing each month of uploads:

It seemed like the only way to do justice to the number and variety of videos I’ve uploaded over the past thirteen years. My channel doesn’t exactly have a content strategy. This is best watched on a large screen with stereo sound, but there is no way you will catch everything anyway. Prepare to be overwhelmed.

Now for the ‘too long; don’t feel obliged to read’ part on how I did it. I’ve uploaded videos in 107 distinct months, so creating a word cloud for each month using wordclouds.com seemed tedious and slow. I looked into web APIs for creating word clouds automatically, and added the code to my app to call them, but then I realised I’d have to sign up for an account, including a payment method, and once I ran out of free word clouds I’d be paying a couple of cents each. That could easily add up to $5 or more if I wanted to try different settings! So obviously I would need to spend many hours programming to avoid that expense.

I have a well-deserved reputation for being something of a gadget freak, and am rarely happier than when spending an entire day programming my computer to perform automatically a task that it would otherwise take me a good ten seconds to do by hand. Ten seconds, I tell myself, is ten seconds. Time is valuable and ten seconds’ worth of it is well worth the investment of a day’s happy activity working out a way of saving it.

Douglas Adams in ‘Last chance to see…’

I searched for free word cloud code in Swift, downloaded the first one I found, and then it was a simple matter of changing it to work on macOS instead of iOS, fixing some alignment issues, getting it to create an image instead of arranging text labels, adding some code to count word frequencies and exclude common English words, giving it colour schemes, background images, and the ability to show smaller words inside characters of other words, getting it to work in 1116 different fonts, export a copy of the cloud to disk at various points during the progress, and also create a straightforward text rendering using the same colour scheme as a word cloud for the intro… before I knew it, I had an app that would automatically create a word cloud from the titles and descriptions of each month’s public uploads, shown over the thumbnail of the most-viewed video from that month, in colour schemes chosen randomly from the ones I’d created in the app, and a different font for each month. I’m not going to submit a pull request; the code is essentially unrecognisable now.

In case any of the thumbnails spark your curiosity, or you just think the trailer was too short and you’d rather watch 107 full videos to get an idea of my channel, here is a playlist of all the videos whose thumbnails are shown in this video:

It’s a mixture of super-popular videos and videos which didn’t have much competition in a given month.

Of course, I needed a soundtrack for my trailer. Music wouldn’t do, because that would reduce my channel trailer to a mere song for anyone who couldn’t see it well. So I wrote some code to make an audio version of each word cloud (or however much of it could fit into five seconds without too many overlapping voices) using the many text-to-speech voices in macOS, with the most common words being spoken louder. I’ll write a separate post about that; I started writing it up here and it got too long.

The handwritten thank you notes at the end were mostly from members of the JoCo Cruise postcard trading club, although one came with a pandemic care package from my current employer. I have regaled people there with various ridiculous stories about my life, and shown them my channel. You’re all most welcome; it’s been fun rewatching the concert videos myself while preparing to upload, and it’s always great to know other people enjoy them too.

I put all the images and sounds together into a video using Final Cut Pro 10.4.8. This was all done on my mid-2014 Retina 15-inch MacBook Pro, Sneuf.


Some Statistics About My Ridiculous YouTube Channel


I’ve developed a bit of a habit of recording entire concerts of musicians who don’t mind their concerts being recorded, splitting them into individual songs, and uploading them to my YouTube channel with copious notes in the video descriptions. My first upload was, appropriately, the band featured in the first image on the web, Les Horribles Cernettes, singing Big Bang. I first got enough camera batteries and SD cards to record entire concerts for the K’s Choice comeback concert in Dranouter in 2009, though the playlist is short, so perhaps I didn’t actually record that entire show.

I’ve also developed a habit of going on a week-long cruise packed with about 25 days of entertainment every year, and recording 30 or so hours of that entertainment. So my YouTube channel is getting a bit ridiculous. I currently have 2723 publicly-visible videos on my channel, and 2906 total videos — the other 183 are private or unlisted, either because they’re open mic or karaoke performances from JoCo Cruise and I’m not sure I have the performer’s permission to post them, or they’re official performances that we were requested to only share with people that were there.

I’ve been wondering just how much I’ve written in my sometimes-overly-verbose video descriptions over the years, and the only way I found to download all that metadata was using the YouTube API. I tested it out by putting a URL with the right parameters in a web browser, but it’s only possible to get the data for up to 50 videos at a time, so it was clear I’d have to write some code to do it.

Late Friday evening, after uploading my last video from JoCo Cruise 2020, I set to writing a document-based CoreData SwiftUI app to download all that data. I know my way around CoreData and downloading and parsing JSON in Swift, but haven’t had many chances to try out SwiftUI, so this was a way I could quickly get the information I wanted while still learning something. I decided to only get the public videos, since that doesn’t need authentication (indeed, I had already tried it in a web browser), so it’s a bit simpler.
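
For the curious, the fetching loop itself is pleasantly boring. Here’s a hedged sketch of the 50-at-a-time pagination, assuming the playlistItems endpoint of the YouTube Data API v3, a placeholder API key and playlist ID, and modern async/await for brevity; the real app also stores everything in CoreData rather than just returning an array:

```swift
import Foundation

// A sketch, not the app's actual code: page through a playlist 50 items at a
// time, following nextPageToken until it runs out. The API key and playlist ID
// are placeholders.
struct PlaylistPage: Decodable {
    struct Item: Decodable {
        struct Snippet: Decodable {
            let title: String
            let description: String
            let publishedAt: String
        }
        let snippet: Snippet
    }
    let items: [Item]
    let nextPageToken: String?
}

func fetchAllVideos(playlistID: String, apiKey: String) async throws -> [PlaylistPage.Item] {
    var allItems: [PlaylistPage.Item] = []
    var pageToken: String?
    repeat {
        var components = URLComponents(string: "https://www.googleapis.com/youtube/v3/playlistItems")!
        components.queryItems = [
            URLQueryItem(name: "part", value: "snippet"),
            URLQueryItem(name: "maxResults", value: "50"),   // the API's per-request limit
            URLQueryItem(name: "playlistId", value: playlistID),
            URLQueryItem(name: "key", value: apiKey),
        ]
        if let token = pageToken {
            components.queryItems?.append(URLQueryItem(name: "pageToken", value: token))
        }
        let (data, _) = try await URLSession.shared.data(from: components.url!)
        let page = try JSONDecoder().decode(PlaylistPage.self, from: data)
        allItems.append(contentsOf: page.items)
        pageToken = page.nextPageToken
    } while pageToken != nil
    return allItems
}
```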

By about 3 a.m., I had all the data, stored in a document and displayed rather simply in my app. Perhaps that was my cue to go to bed, but I was too curious. So I quickly added some code to export all the video descriptions in one text file and all the video titles in another. I had planned to count the words within the app (using enumerateSubstrings byWords or enumerateTags, of course… we’re not savages! As a linguist I know that counting words is more complicated than counting spaces), but it was getting late and I knew I wanted the full text for other things, so I just exported the text and opened it in Pages. The verdict:

  • 2723 public videos
  • 33 465 words in video titles
  • 303 839 words in video descriptions
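
For the record, the non-savage counting mentioned above only takes a few lines; here’s a rough sketch (the sample string is invented, and word boundaries come from the text system rather than from splitting on spaces):

```swift
import Foundation

// A sketch of counting words with enumerateSubstrings(in:options:), letting
// the text system decide what counts as a word instead of counting spaces.
func wordCount(in text: String) -> Int {
    var count = 0
    text.enumerateSubstrings(in: text.startIndex..<text.endIndex,
                             options: .byWords) { _, _, _, _ in
        count += 1
    }
    return count
}

print(wordCount(in: "Recorded at the K’s Choice comeback concert in Dranouter"))
```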

The next day, I wanted to create some word clouds with the data, but all the URLs in the video descriptions got in the way. I quite often link to the playlists each video is in, related videos, and where to purchase the songs being played. I added some code to remove links (using stringByReplacingMatches with an NSDataDetector with the link type, because we’re not savages! As an internet person I know that links are more complicated than any regex I’d write.) I found that Pages counts URLs as having quite a few words, so the final count is:

  • At least 4 633 links (this is just by searching for ‘http’ in the original video descriptions, like a savage, so might not match every link)
  • 267 567 words in video descriptions, once links are removed. I could almost win NaNoWriMo with the links from my video descriptions alone.
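
The link-stripping itself is also small, since NSDataDetector is a subclass of NSRegularExpression and inherits its replacement method. A rough sketch, with a made-up description string:

```swift
import Foundation

// A sketch of stripping links with NSDataDetector (the sample text is invented).
func removingLinks(from text: String) -> String {
    // The built-in link detector; force-try is fine here since the type is valid.
    let detector = try! NSDataDetector(types: NSTextCheckingResult.CheckingType.link.rawValue)
    return detector.stringByReplacingMatches(in: text,
                                             options: [],
                                             range: NSRange(text.startIndex..., in: text),
                                             withTemplate: "")
}

let description = "Buy the song at https://example.com/song and the album at https://example.com/album"
print(removingLinks(from: description))
// The links vanish, leaving their surrounding whitespace behind.
```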

I then had my app export the publish dates of all the videos, imported them into Numbers, and created the histogram shown above. I actually learnt quite a bit about Numbers in the process, so that’s a bonus. I’ll probably do a deeper dive into the upload frequency later, with word clouds broken down by time period to show what I was uploading at any given time, but for now, here are some facts:

  • The single day when I uploaded the most publicly-visible videos was 25 December 2017, when I uploaded 34 videos — a K’s Choice concert and a Burning Hell concert in Vienna earlier that year. I’m guessing I didn’t have company for Christmas, so I just got to hang out at home watching concerts and eating inexpertly-roasted potatoes.
  • The month when I uploaded the most publicly-visible videos was April 2019. This makes sense, as I was unemployed at the time, and got back from JoCo Cruise on March 26.

So, on to the word clouds I cleaned up that data to make. I created them on wordclouds.com, because Wordle has rather stagnated. Most of my video titles mention the artist name and concert venue and date, so some words end up being extremely common. This huge variation in word frequency meant I had to reduce the size from 0 all the way to -79 for it to be able to fit common words such as ‘Jonathan’. Wordclouds lets you choose the shape of the final word cloud, but at that scale, it ends up as the intersection of a diamond with the chosen shape, so the shape doesn’t end up being recognisable. Here it is, then, as a diamond:

[Word cloud of the video titles]

The video descriptions didn’t have as much variation between word frequencies, so I only had to reduce it to size -45 to fit both ‘Jonathan’ and ‘Coulton’ in it. I still don’t know whether there are other common words that didn’t fit, because the site doesn’t show that information until it’s finished, and there are so many different words that it’s still busy drawing the word cloud. Luckily I could download an image of it before that finished. Anyway, at size -45, the ‘camera’ shape I’d hoped to use isn’t quite recognisable, but I did manage a decent ‘YouTube play button’ word cloud:

[Word cloud of the video descriptions]

One weird fact I noticed is that I mention Paul Sabourin of Paul and Storm in video descriptions about 40% more often than I mention Storm DiCostanzo, and I include Paul’s last name three times as often as Storm’s. To rectify this, I wrote a song mentioning Storm’s last name a lot, to be sung to the tune of ‘Hallelujah’, because that’s what we do:

We’d like to sing of Paul and Storm.
It’s Paul we love to see perform.
The other member’s name’s the one that scans though.
So here’s to he who plays guitar;
let’s all sing out a thankful ‘Arrr!’
for Paul and Storm’s own Greg “Storm” DiCostanzo!
DiCostanzo, DiCostanzo, DiCostanzo, DiCostanzo

I’m sure I’ll download more data from the API, do some more analysis, and mine the text for haiku (if Haiku Detector even still runs — it’s been a while since I touched it!) later, but that’s enough for now!



Things I forgot to blog about, part n+1: NanoRhymo #2


In November 2018 I created NanoRhymo (inspired by NaNoWriMo), in which I wrote and tweeted a very short rhyming poem every day. I did the same thing in April 2019 for Global Poetry Writing Month. I started pretty late with NanoRhymo in 2019, and didn’t end up with a poem for each day of November, but I’ve started it again on January 1 and made up for the missing poems. In November, I mostly stuck to writing something based on a random rhyme from the rhyming dictionary I made, rhyme.science — either a new one I’d found each day, or one generated earlier for the @RhymeScience twitter feed. In January, I’ve often been inspired by other things.

I’ll continue writing a NanoRhymo a day for as long as I can. Here’s what I’ve written so far:

Day 1,  inspired by the rhymes later, translator, and (in non-rhotic accents) convey to:

When you’ve got a thought to convey to
many mortals, sooner or later,
then you ought to get a translator.

Day 2, inspired by the rhyme chunked and bunked, and the folk etymology of ‘chunder’:

Sailors lying in their bunks
would shout “Ahoy there, mate… watch under!”
and then let loose digested chunks
on hapless seamen sleeping under.

That’s why even now, down under,
[I am lying; truth debunks!]
some refer to puke as chunder.
[This is half-digested junk
Please accept my weak apology
and not this doubtful etymology.]

Day 3, inspired by a friend’s experience learning flying trapeze:

My friend Robert Burke tried the flying trapeze.
It meant lots of work mulling hypotheses,
and then much amusement and catching catchees,
to end up all bruised on the backs of the knees.

Day 4, inspired by the rhyme spermicides and germicide’s:

Looking at small things up close and myopically,
one might prevent overgrowth with a germicide.
But looking at large things afar, macroscopically,
one must prevent unchecked growth with a spermicide.

Day 5, inspired by the rhyme explainable and containable:

As soon as the bug is explainable,
we can hope that it might be containable,
and our neural nets will be retrainable,
and our code is so very maintainable
that this progress is surely sustainable!

Day 6, inspired by the rhyme freaking and unspeaking:

Mouth agape, stunned, unspeaking
Eyes wide open, silent freaking,
What could this strange vision be?
a music video, on MTV?!

Day 7, inspired by the rhyme trekked and collect:

Over much terrain they trekked;
specimens they did collect,
to show just how diverse life was
before we killed it off, just ‘cause.

Day 8, inspired by the rhyme interleaved and peeved:

If rhyming couplets leave you peeved,
here, I tried ABAB.
Now the rhymes are interleaved!
This rhyme and rhythm’s reason-free.

Day 9, a rewrite of Day 8 that can be sung to a possibly recognisable tune:

If rhyming couplets leave you peeved,
Then try to make them interleaved
Or don’t, and then just let the hate flow through ya
Just AAB, then CCB
This rhyme and rhythm’s reason-free.
At least it can be sung to Hallelujah.

The most Hallelujest Joey Marianer sang that version:

Day 10, inspired by the rhyme platitudes and latitude’s, and my general dislike of casual hemispherism:

I’m just fine with the end-of-year platitudes —
“Happy Holidays”, nice and generic,
but please, be inclusive of latitudes:
“Happy Winter” is too hemispheric!

Day 11, another Hallelujah, inspired by Joey’s singing of the previous Hallelujah:

A kitchen scale, a petrol gauge,
a cylinder, a final page
will tell you up to what things have amounted.
An abacus, a quipu string,
some tally sticks, to always sing,
are all things on which Joey can be counted.

Day 12, inspired by the rhyme deprecations and lamentations, some deprecated code I was removing from the software I develop at work, and also complaints about macOS Catalina dropping support for 32-bit applications. I imagine it sung to the tune of Camp Bachelor Alma Mater:

Hear the coders’ lamentations
over apps that will not run,
due to years-old deprecations,
updates that they’ve never done.

Day 13, inspired by the rhyme whoop’s and sloop’s, and the tradition on JoCo Cruise of ending the final concert with the song Sloop John B:

Have some more whoops on me,
hearing the Sloop John B
as JoCo Cruise comes to an end.
You still have all night.
Hang loose, or sleep tight.
Well, we feel so broke up
but you’ll stay my friend.

Day 14, to the tune of Morning Has Broken:

Something is broken;
look at that warning!
Unbalanced token.
Unknown keyword.
Raise the exceptions.
Erase all the warnings.
Raze preconceptions wrongly inferred.

Day 15, inspired by Hilbert’s paradox of the Grand Hotel:

The rooms are all full for as far out as they can see;
such a big guest house to fill, but oh well.
What’s this? Nonetheless, there’s a sign saying vacancy!
There’s always more room at the Hilbert hotel.

Day 16, inspired by the rhyme feeling’s and ceilings, and the song Happy, by Pharrell Williams:

Clap along if you feel like a room without a roof. 👏
Please applaud if you think you’re a chamber with no ceiling. 👏
Clap along If you feel like happiness is the truth. 👏
Please applaud if you think there’s veracity in good feelings. 👏

For day 17, I let Pico, emacs, ed, vi count as the NanoRhymo, even though it does not mention the text editor nano.

November ended with no more rhymes, but I started it up again on January 1, simply because I was inspired to, and I’ve continued to get ideas every day since. I’m not promising to keep this up daily all year (indeed, I promise not to keep it up during MarsCon and JoCo Cruise 2020) but I’ll post NanoRhymi whenever I feel inspired to.

Day 18 (on January 1, 2020) was inspired by the rhyme unworthy and incur the:

Don’t worry that you might incur the
sentence, “That person’s unworthy.”
Just try what you wish, and try plenty,
and have a great year twenty-twenty.

Day 19, inspired by the rhyme verb and kerb, but using the North American ‘curb’ spelling because it’s closer to the verb derived from the noun:

If you’d punch down, or kick to the curb
for verbing a noun, or nouning a verb,
researching the past will amount your disturb.

So many of the words we use today, including some in that poem, were once strictly parts of speech other than the ones they’re now used as without a second thought, and people objected to their shifts in usage just as they object to all manner of language change today.

Day 20, inspired by the rhymes occur to, Berta, and (in non-rhotic accents) subverter:

If it were to occur to Berta the subverter to hurt Alberta,
she’d prefer to assert a slur to refer to her to stir internal murder.
(Stones break bones but names make shame — heals more slowly, hurts the same.)

Day 21, inspired by the rhyme unconcealed and unpeeled:

While you’re growing in the field,
all your goodness is concealed,
till some lovely creature picks you,
doesn’t think they have to fix you,
lets you chill, let down your shield;
then, when you are fully peeled,
your sweetest inner self revealed,
that cunning rascal bites and licks you.

Day 22, inspired by the rhymes for fish, dwarfish, and (maybe in some non-rhotic accents with the cot-caught merger) standoffish, the ‘teach a man to fish‘ metaphor, and of course, my own poem, They Might Not Be Giants:

If a person’s always asking for fish,
don’t give them one, and go away, standoffish.
Teach techniques that they’ll expand on.
Be the shoulders they will stand on.
Not a giant — generous and dwarfish.

And then the same thing as a limerick:

There once was a man asking for fish,
who got one from someone standoffish.
Then shoulders to stand on
and tricks to expand on,
were given by someone quite dwarfish.

Day 23, inspired by… certain kinds of transphobic people, I guess:

Some folk seem to be offended
by the thought the queerly gendered
might themselves become offended
when they’re purposely misgendered,
so they’ve boorishly defended
all the hurt that they intended
towards the “easily offended”
who are “wimps” to try to end it.

Day 24, a double dactyl inspired by a conversation with someone who’s considering hormone therapy with one aim being a reduction of schlength, during which we noticed that ‘endocrinologist’ is a double dactyl, and also inspired by Paul and Storm’s habit of calling Jonathan Coulton ‘Dr. Smallpenis‘ (with the ‘e’ unstressed) which began on JoCo Cruise 2013:

Dr. Jon Smallpənis,
Endocrinologist,
helps you to shrink all the
parts that aren’t you.

Piss off, dysphoria!
Spironolactone could
soon make you tinkle the
whole darn day through.

Spironolactone is a medication that blocks the effects of testosterone and, as a side effect, can increase urinary frequency.

Day 25, inspired by the rhyme eleven words and heavenwards:

Dear Father, a prayer I remember, amen.
Another, sincere from a vendor, again.
As if by reciting just ten or eleven words
I’ll lift myself quite transcendentally heavenwards.

Day 26, inspired by what I was actually told at my first comprehensive annual checkup:

Sit up straight!
Lose some weight!
Take these pills!
Cure your ills!
Your heart is beating!
You’re good at breathing!
With those two habits kept up,
We’ll see you at the next year’s checkup.

They really did seem impressed by how well I could breathe. I wasn’t too good at it when I started, but I have been practising my whole life, and if I’m good then I may as well continue the habit.

Day 27, inspired by this Smarter Every Day video about activating smart speakers using laser light instead of sound:

Here’s a technique that is quite underhand
to beam gadgets speaking they might understand,
and give an unsound and light-fingered command.

This one works best in accents without the trap-bath split, so that ‘command’ rhymes with ‘understand’ and ‘underhand’.

[Photo: a small, transparent plastic container with a label saying: 105030064 Bodenträger Safety Safety Trans. 20 Stk.]

Day 28, inspired by a container of those little dowel things to hold up shelves, which was labelled ‘Safety trans.’, and the song The Safety Dance, by Men Without Hats. This parody is presumably to be sung by Women and Nonbinary People Without Hats:

You can trans[ition] iff you want to.
You can leave your assigned gender behind.
‘Cause your assigned gender ain’t trans and if you don’t trans[ition],
Well your assigned gender stays assigned.

Day 29, inspired by a video about Jason Padgett, who survived a vicious beating to end up with (among less attractive brain issues) savant skills and a kind of synaesthesia:

Acquired savants suffer pain,
to wake up with a better brain.
Get a bump, or have a seizure,
then end up with synaesthesia —
not the grapheme-colour kind,
rather, an amazing mind!

Day 30 is a version of day 29’s poem which can be sung to the tune of Hallelujah, with a second verse reminding people that synaesthesia is actually pretty common, affecting about 4.4% of people (I have the grapheme-colour kind), and doesn’t necessarily confer superpowers:

Acquired savants suffer pain,
to wake up with a better brain
by healing from an injury or seizure.
They sometimes get amazing minds
associating different kinds
of input in a thing called synaesthesia.
Synaesthesia, synaesthesia, synaesthesia, synaesthesia.

But synaesthetes are everywhere,
not magical, or even rare.
It doesn’t make them smart or make things easier.
It just makes Thursday forest green,
or K maroon and 7 mean.
Your ‘the’-tastes-like-vanilla synaesthesia
Synaesthesia, synaesthesia, synaesthesia…

This refers to time-unit-color synaesthesia, grapheme-colour synaesthesia, ordinal linguistic personification (also known as sequence-personality synaesthesia), and lexical-gustatory synaesthesia, but there are many other kinds.

Day 31, a parody of ABBA’s Fernando for which I am deeply sorry:

Did you hear he goes commando?
I remember long ago another starry night like this.
In the firelight, commando,
he was wearing his new kilt and playing bagpipes by the fire.
I could hear his sudden screams
and sounds of mountain oysters sizzling in the fryer.

Day 32, inspired by two tweets I saw, each quoting the same tweet where someone had contrasted pictures of Prince Harry in the army with pictures of him with his wife, and claimed that getting out of the army and getting married was somehow emasculation caused by ‘toxic’ Hollywood feminism.

The two tweets happened to rhyme with each other and follow the same structure, from the ‘fellas, is it gay’ meme, so I put them together, and added a few lines:

Fellas, is it gay to have a wife?
Fellas, is it gay to be a human being with a life?
Fellas, is it gay to wear a suit?
Fellas, is it gay to dress to socialise instead of shoot?
(Fellas, is it toxic to be gay?
Fellas, why frame questions with a word she didn’t say?)

Day 33, another Hallelujah parody, inspired by Joey’s observation that NanoRhymo scans:

You want to practise writing verse.
The secret’s to be very terse.
You don’t have to try hard, just have to try mo’.
You write some dogg’rel every day
and some you’ll toss, but some will stay.
An atom at a time; it’s NanoRhymo.
NanoRhymo, NanoRhymo, NanoRhymo, NanoRhymo.

Day 34, inspired by a Twitter thread which began with my friend Rob Rix expressing frustration with type inference, and one of his followers suggesting the term ‘type deference’:

I love when it complies,
regards me with deference,
and bravely compiles
my unguarded dereference.

Day 35, inspired by… tea. I feel so rich when I make a pot of tea and top it up all day, having unlimited tea without feeling like maybe it’s wasteful to be using my eighth teabag of the day:

If hot tea’s an oddity,
the tea bag’s your commodity,
but if you drink a lot of tea,
you should make a pot of tea.
(To add some boiling water t’
whenever you want hotter tea.)

Day 36, inspired by my efforts to write an AppleScript to copy all my NanoRhymi and GloPoWriMo poems from Notes into a spreadsheet in Numbers, which initially failed because I had accidentally addressed the script to Pages instead, and Pages don’t know sheet:

👩🏻‍💻Hello there! Your finest Greek corpus, to go!
👩‍🍳The what now? Not understand corpus, no no!
👩🏻‍💻The active Greek corpus, the frontmost, the first, display all the corpora you have; am I cursed?
👩‍🍳I’m sorry? Your question is Greek to me… how?
👩🏻‍💻Okay then, just show me your bookcases, now!
👩‍🍳Bookcases? I have none; you’ve made a mistake.
👩🏻‍💻Ah, frack! You’re no linguist! You’re actually the baker!

The spreadsheet, by the way, shows I’ve written about a hundred of these small poems in total so far, in the course of my NanoRhymo and GloPoWriMo stints. I haven’t gone through it checking for notes that didn’t contain completed poems, so I don’t know the exact number yet. In the next roundup of these things, I’ll probably start numbering them based on that total, rather than the ‘days’ of any particular run of them.

Day 37 (today, as I write this), a parody of Taylor Swift’s ‘Shake it Off‘ inspired by another tweet by Rob Rix, in which he notices that a calculation done in Spotlight Search which should give the result zero does not, and remarks, ‘computers gonna compute’:

’Cause the bugs are gonna ship, ship, ship, ship, ship
And an on bit is a blip, blip, blip, blip, blip
I’m just gonna flip, flip, flip, flip, flip
I flip it off ⌽, I flip it off 🖕🏻

That’s all of the NanoRhymi I have so far; I’ll post more here occasionally, but follow me on Twitter if you want to see them as they happen.

In other news, please consider buying one or all of the MarsCon Dementia Track Fundraiser albums, which are albums of live comedy music performances from previous MarsCon Dementia Tracks, sold to raise funds for the performers’ hotel costs for the next one. The 2020 fundraiser album (with the concerts from MarsCon 2019) is nearly four hours of live comedy music for $20, and includes my performances of Chicken Monkey Duck and Why I Perform at Open Mics.

For yet more music, Joey and I will be participating in round #16 of SpinTunes, a songwriting competition following in the footsteps of Masters of Song Fu. I’ve been following it since the beginning, but never had the accompaniment to actually enter.



NastyWriter 1.0.2


I released a new version of disloyal NastyWriter today! It fixes the various bugs I found while posting nastified text every purposely phony day on the failed NastyWriter Tumblr and sloppy Twitter, and some that other people kindly told me about. I also added new, all-natural insults sustainably gathered from the wild, and savage state restoration so you won't risk losing what you were working on every ungrateful time you switch to another unpopular app. Given how simple that was to implement, I am now even more annoyed at the many better-funded apps that don't do it. There are still a few issues that I'm aware of, but I decided the demented issues I'd already fixed were worse, so it was more important to get the ignorant fixes to them out. Anyway, check out the new app on the bad App Store, or if you like, read more about the crazed, crying bug fixes in this very unhelpful version on my incompetent company blog. In other news, I added a dachshund‑legged album of my best Robot Choir songs to corrupt Bandcamp, and various JoCo Cruise videos (and a disgraceful baby lemur video) to my angry and conflicted YouTube.

Maturity reduced by NastyWriter.

I released a new version of NastyWriter today! It fixes the various bugs I found while posting nastified text every day on the NastyWriter Tumblr and Twitter, and some that other people kindly told me about. I also added new, all-natural insults sustainably gathered from the wild, and state restoration so you won’t risk losing what you were working on every time you switch to another app. Given how simple that was to implement, I am now even more annoyed at the many better-funded apps that don’t do it.

There are still a few issues that I’m aware of, but I decided the issues I’d already fixed were worse, so it was more important to get the fixes to them out. Anyway, check out the new app on the App Store, or if you like, read more about the bug fixes in this version on my company blog.

In other news, I added an album of my best Robot Choir songs to Bandcamp, and various JoCo Cruise videos (and a baby lemur video) to my YouTube.


