
Audio Word Clouds


For my comprehensive channel trailer, I created a word cloud of the words used in titles and descriptions of the videos uploaded each month. Word clouds have been around for a while now, so that’s nothing unusual. For the soundtrack, I wanted to make audio versions of these word clouds using text-to-speech, with the most common words being spoken louder. This way people with either hearing or vision impairments would have a somewhat similar experience of the trailer, and people with no such impairments would have the same surplus of information blasted at them in two ways.

I checked to see if anyone had made audio word clouds before, and found Audio Cloud: Creation and Rendering, which makes me wonder if I should write an academic paper about my audio word clouds. That paper describes an audio word cloud created from audio recordings using speech-to-text, while I wanted to create one from text using text-to-speech. I was mainly interested in any insights into the number of words we could perceive at once at various volumes or voices. In the end, I just tried a few things and used my own perception and that of a few friends to decide what worked. Did it work? You tell me.

Part of the System Voice menu in the Speech section of the Accessibility panel of the macOS Catalina System Preferences

Voices

There’s a huge variety of English voices available on macOS, with accents from Australia, India, Ireland, Scotland, South Africa, the United Kingdom, and the United States, and I’ve installed most of them. I excluded the voices whose speaking speed can’t be changed, such as Good News, and a few novelty voices, such as Bubbles, which aren’t comprehensible enough when there’s a lot of noise from other voices. I ended up with 30 usable voices. I increased the volume of a few which were harder to understand when quiet.

I wondered whether it might work best with only one or a few voices or accents in each cloud, analogous to the single font in each visual word cloud. That way people would have a little time to adapt to understand those specific voices rather than struggling with an unfamiliar voice or accent with each word. On the other hand, maybe it would be better to have as many voices as possible in each word cloud so that people could distinguish between words spoken simultaneously by voice, just as we do in real life. In the end I chose the voice for each word randomly, and never got around to trying the fewer-distinct-voices version. Being already familiar with many of these voices, I’m not sure I would have been a good judge of whether that made it easier to get used to them.

Arranging the words

It turns out making an audio word cloud is simpler than making a visual one. There’s only one dimension in an audio word cloud — time. Volume could be thought of as sort of a second dimension, as my code would search through the time span for a free rectangle of the right duration with enough free volume. I later wrote an AppleScript to create ‘visual audio word clouds’ in OmniGraffle showing how the words fit into a time/volume rectangle. I’ve thus illustrated this post with a visual word cloud of this post, and a few audio word clouds and visual audio word clouds of this post with various settings.

A visual representation of an audio word cloud of an early version of this post, with the same hubbub factor as was used in the video. The horizontal axis represents time, and the vertical axis represents volume. Rectangles in blue with the darker gradient to the right represent words panned to the right, while those in red with the darker gradient to the left represent words panned to the left.

However, words in an audio word cloud can’t be oriented vertically as they can in a visual word cloud, nor can there really be ‘vertical’ space between two words, so it was only necessary to search along one dimension for a suitable space. I limited the word clouds to five seconds, and discarded any words that wouldn’t fit in that time, since it’s a lot easier to display 301032 words somewhat understandably in nine minutes than it is to speak them. I used the most common (and therefore louder) words first, sorted by length, and stopped filling the audio word cloud once I reached a word that would no longer fit. It would sometimes still be possible to fit a shorter, less common word in that cloud, but I didn’t want to include words much less common than the words I had to exclude.

I set a preferred volume for each word based on its frequency (with a given minimum and maximum volume so I wouldn’t end up with a hundred extremely quiet words spoken at once) and decided on a maximum total volume allowed at any given point. I didn’t particularly take into account the logarithmic nature of sound perception. I then found a time in the word cloud where the word would fit at its preferred volume when spoken by the randomly-chosen voice. If it didn’t fit, I would see if there was room to put it at a lower volume. If not, I’d look for places it could fit by increasing the speaking speed (up to a given maximum) and if there was still nowhere, I’d increase the speaking speed and decrease the volume at once. I’d prioritise reducing the volume over increasing the speed, to keep it understandable to people not used to VoiceOver-level speaking speeds. Because of the one-and-a-bit dimensionality of the audio word cloud, it was easy to determine how much to decrease the volume and/or increase the speed to fill any gap exactly. However, I was still left with gaps too short to fit any word at an understandable speed, and slivers of remaining volume smaller than my per-word minimum.
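Stripped of the audio details, that search can be sketched like this — a minimal Python sketch of the greedy placement described above, not the actual Swift code. The clip length, volume clamping, and volume-before-speed priority follow the post; everything else (the time step, speeds, example words, and durations) is invented for illustration:

```python
# Sketch of the greedy placement described above. The 5-second clip,
# per-word volume clamp, total-volume cap, and trying lower volume before
# higher speed follow the post; all the numbers and words are invented.

CLIP_LEN, STEP = 5.0, 0.05          # seconds; STEP is the search resolution
MIN_VOL, MAX_VOL = 0.2, 1.0         # per-word volume clamp
MAX_TOTAL_VOL = 2.0                 # cap on summed volume at any instant
SPEEDS = (1.0, 1.5)                 # normal speed first, then sped up

slots = round(CLIP_LEN / STEP)
used = [0.0] * slots                # volume already committed per time slice

def place(duration, preferred_vol):
    """Return (start, volume, speed) for one word, or None if it won't fit.
    Lowering the volume is tried before raising the speed."""
    for speed in SPEEDS:
        n = round(duration / speed / STEP)
        for start in range(slots - n + 1):
            avail = MAX_TOTAL_VOL - max(used[start:start + n])
            vol = min(preferred_vol, avail)   # shrink to fill the gap exactly
            if vol >= MIN_VOL:
                for i in range(start, start + n):
                    used[i] += vol
                return start * STEP, vol, speed
    return None                               # stop filling the cloud here

# Most common (loudest) words first: (text, spoken duration, preferred volume)
words = [("competition", 0.9, 1.0), ("cruise", 0.7, 0.6), ("video", 0.5, 0.5)]
placed = [(text, place(d, v)) for text, d, v in words]
```

The real app additionally had to ask the text-to-speech engine how long each voice took to speak each word, rather than assuming a fixed duration.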

A visual representation of an audio word cloud of this post, with a hubbub factor that could allow two additional words to be spoken at the same time as the others.

I experimented with different minimum and maximum word volumes, and maximum total volumes, which all affected how many voices might speak at once (the ‘hubbub level’, as I call it). Quite late in the game, I realised I could have some voices in the right ear and some in the left, which makes it easier to distinguish them. In theory, each word could come from a random location around the listener, but I kept to left and right — in fact, I generated separate left and right tracks and adjusted the panning in Final Cut Pro. Rather than changing the logic to search two separate channels for audio space, I simply made my app alternate between left and right when creating the final tracks. By doing this, I could increase the total hubbub level while keeping many of the words understandable. However, the longer it went on, the more taxing it was to listen to, so I decided to keep the hubbub level fairly low.

The algorithm is deterministic, but since voices are chosen randomly, and different voices take different amounts of time to speak the same words even at the same number of words per minute, the audio word clouds created from the same text can differ considerably. Once I’d decided on the hubbub level, I got my app to create a random one for each month, then regenerated any where I thought certain words were too difficult to understand.

Capitalisation

The visual word cloud from December 2019, with both ‘Competition’ and the lowercase ‘competition’ featured prominently

In my visual word clouds, I kept the algorithm case-sensitive, so that a word with the same spelling but different capitalisation would be counted as a separate word, and displayed twice. There are arguments for keeping it like this, and arguments to collapse capitalisations into the same word — but which capitalisation of it? My main reason for keeping the case-sensitivity was so that the word cloud of Joey singing the entries to our MathsJam Competition Competition competition would have the word ‘competition’ in it twice.

Sometimes these really are separate words with different meanings (e.g. US and us, apple and Apple, polish and Polish, together and ToGetHer) and sometimes they’re not. Sometimes these two words with different meanings are pronounced the same way, other times they’re not. But at least in a visual word cloud, the viewer always has a way of understanding why the same word appears twice. For the audio word cloud, I decided to treat different capitalisations as the same word, but as I’ve mentioned, capitalisation does matter in the pronunciation, so I needed to be careful about which capitalisation of each word to send to the text-to-speech engine. Most voices pronounce ‘JoCo’ (short for Jonathan Coulton, pronounced with the same vowels as ‘go-go’) correctly, but would pronounce ‘joco’ or ‘Joco’ as ‘jocko’, with a different vowel in the first syllable. I ended up counting any words with non-initial capitals (e.g. JoCo, US) as separate words, but treating title-case words (with only the initial letter capitalised) as the same as all-lowercase, and pronouncing them in title-case so I wouldn’t risk mispronouncing names.
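In code, the rule I ended up with amounts to something like this (a Python sketch; the real logic lived in my Swift app, and `capitalize` is only a rough stand-in for proper title-casing):

```python
def cloud_key(word):
    """Collapse title-case and all-lowercase spellings into one cloud entry,
    but keep words with non-initial capitals (JoCo, US) distinct."""
    if any(c.isupper() for c in word[1:]):
        return word                 # JoCo, US: counted as separate words
    return word.lower()             # Competition and competition collapse

def spoken_form(key):
    """Send the title-case spelling to text-to-speech for collapsed words,
    so names aren't mispronounced."""
    return key if key != key.lower() else key.capitalize()
```

So ‘Competition’ and ‘competition’ count as one word but are spoken as ‘Competition’, while ‘JoCo’ keeps its capitals and its pronunciation.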

Further work

A really smart version of this would get the pronunciation of each word in context (the same way my rhyming dictionary rhyme.science finds rhymes for the different pronunciations of homographs, e.g. bow), group the words by how they were pronounced, and make a word cloud grouped entirely by pronunciation rather than spelling, so ‘polish’ and ‘Polish’ would appear separately but there would be no danger of, say, ‘rain’ and ‘reign’ both appearing in the audio word cloud and sounding like duplicates. However, which words are actually pronounced the same depends on the accent (e.g. whether ‘cot’ and ‘caught’ sound the same) and on the voice’s text normalisation — you might have noticed that some of the audio word clouds in the trailer have ‘aye-aye’ while others have ‘two’ for the Roman numeral ‘II’.

Similarly, a really smart visual word cloud would use natural language processing to separate out different meanings of homographs (e.g. bow🎀, bow🏹, bow🚢, and bow🙇🏻‍♀️) and display them in some way that made it obvious which was which, e.g. by using different symbols, fonts, styles, colours for different parts of speech. It could also recognise names and keep multi-word names together, count words with the same lemma as the same, and cluster words by semantic similarity, thus putting ‘Zoe Keating’ near ‘cello’, and ‘Zoe Gray’ near ‘Brian Gray’ and far away from ‘Blue’. Perhaps I’ll work on that next.

A visual word cloud of this blog post about audio word clouds, superimposed on a visual representation of an audio word cloud of this blog post about audio word clouds.

I’ve recently been updated to a new WordPress editor whose ‘preview’ function gives a ‘page not found’ error, so I’m just going to publish this and hope it looks okay. If you’re here early enough to see that it doesn’t, thanks for being so enthusiastic!


How to fit 301032 words into nine minutes


A few months ago I wrote an app to download my YouTube metadata, and I blogged some statistics about it and some haiku I found in my video titles and descriptions. I also created a few word clouds from the titles and descriptions. In that post, I said:

Next perhaps I’ll make word clouds of my YouTube descriptions from various time periods, to show what I was uploading at the time. […] Eventually, some of the content I create from my YouTube metadata will make it into a YouTube video of its own — perhaps finally a real channel trailer. 

Me, two and a third months ago

TL;DR: I made a channel trailer of audiovisual word clouds showing each month of uploads:

It seemed like the only way to do justice to the number and variety of videos I’ve uploaded over the past thirteen years. My channel doesn’t exactly have a content strategy. This is best watched on a large screen with stereo sound, but there is no way you will catch everything anyway. Prepare to be overwhelmed.

Now for the ‘too long; don’t feel obliged to read’ part on how I did it. I’ve uploaded videos in 107 distinct months, so creating a word cloud for each month using wordclouds.com seemed tedious and slow. I looked into web APIs for creating word clouds automatically, and added the code to my app to call them, but then I realised I’d have to sign up for an account, including a payment method, and once I ran out of free word clouds I’d be paying a couple of cents each. That could easily add up to $5 or more if I wanted to try different settings! So obviously I would need to spend many hours programming to avoid that expense.

I have a well-deserved reputation for being something of a gadget freak, and am rarely happier than when spending an entire day programming my computer to perform automatically a task that it would otherwise take me a good ten seconds to do by hand. Ten seconds, I tell myself, is ten seconds. Time is valuable and ten seconds’ worth of it is well worth the investment of a day’s happy activity working out a way of saving it.

Douglas Adams, in ‘Last Chance to See’

I searched for free word cloud code in Swift, downloaded the first one I found, and then it was a simple matter of changing it to work on macOS instead of iOS, fixing some alignment issues, getting it to create an image instead of arranging text labels, adding some code to count word frequencies and exclude common English words, giving it colour schemes, background images, and the ability to show smaller words inside characters of other words, getting it to work in 1116 different fonts, export a copy of the cloud to disk at various points during the progress, and also create a straightforward text rendering using the same colour scheme as a word cloud for the intro… before I knew it, I had an app that would automatically create a word cloud from the titles and descriptions of each month’s public uploads, shown over the thumbnail of the most-viewed video from that month, in colour schemes chosen randomly from the ones I’d created in the app, and a different font for each month. I’m not going to submit a pull request; the code is essentially unrecognisable now.
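The frequency-counting part is the simplest piece of all that; stripped to its essentials it looks something like this (a Python sketch rather than the Swift original, with an illustrative stopword list — the real excluded-words list was much longer):

```python
import re
from collections import Counter

# A few common English words to exclude; illustrative only.
STOPWORDS = {"the", "a", "an", "of", "and", "to", "in", "on", "at"}

def word_frequencies(text):
    """Case-sensitive counts, so 'Competition' and 'competition' are
    tallied (and drawn) separately, as in the visual clouds."""
    words = re.findall(r"[A-Za-z']+", text)
    return Counter(w for w in words if w.lower() not in STOPWORDS)

freqs = word_frequencies(
    "The Competition Competition competition, a competition of puns")
```

The frequencies then drive both the font size in the visual clouds and the preferred volume in the audio ones.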

In case any of the thumbnails spark your curiosity, or you just think the trailer was too short and you’d rather watch 107 full videos to get an idea of my channel, here is a playlist of all the videos whose thumbnails are shown in this video:

It’s a mixture of super-popular videos and videos which didn’t have much competition in a given month.

Of course, I needed a soundtrack for my trailer. Music wouldn’t do, because that would reduce my channel trailer to a mere song for anyone who couldn’t see it well. So I wrote some code to make an audio version of each word cloud (or however much of it could fit into five seconds without too many overlapping voices) using the many text-to-speech voices in macOS, with the most common words being spoken louder. I’ll write a separate post about that; I started writing it up here and it got too long.

The handwritten thank you notes at the end were mostly from members of the JoCo Cruise postcard trading club, although one came with a pandemic care package from my current employer. I have regaled people there with various ridiculous stories about my life, and shown them my channel. You’re all most welcome; it’s been fun rewatching the concert videos myself while preparing to upload, and it’s always great to know other people enjoy them too.

I put all the images and sounds together into a video using Final Cut Pro 10.4.8. This was all done on my mid-2014 Retina 15-inch MacBook Pro, Sneuf.


Some Statistics About My Ridiculous YouTube Channel


I’ve developed a bit of a habit of recording entire concerts of musicians who don’t mind their concerts being recorded, splitting them into individual songs, and uploading them to my YouTube channel with copious notes in the video descriptions. My first upload was, appropriately, the band featured in the first image on the web, Les Horribles Cernettes, singing Big Bang. I first got enough camera batteries and SD cards to record entire concerts for the K’s Choice comeback concert in Dranouter in 2009, though the playlist is short, so perhaps I didn’t actually record that entire show.

I’ve also developed a habit of going on a week-long cruise packed with about 25 days of entertainment every year, and recording 30 or so hours of that entertainment. So my YouTube channel is getting a bit ridiculous. I currently have 2723 publicly-visible videos on my channel, and 2906 total videos — the other 183 are private or unlisted, either because they’re open mic or karaoke performances from JoCo Cruise and I’m not sure I have the performer’s permission to post them, or they’re official performances that we were requested to only share with people that were there.

I’ve been wondering just how much I’ve written in my sometimes-overly-verbose video descriptions over the years, and the only way I found to download all that metadata was using the YouTube API. I tested it out by putting a URL with the right parameters in a web browser, but it’s only possible to get the data for up to 50 videos at a time, so it was clear I’d have to write some code to do it.
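The pagination itself is simple: each response includes a nextPageToken to pass along with the next request. A Python sketch of the loop (the request is left abstract here; the real endpoint was the YouTube Data API, and the playlist ID and key in the comment are placeholders, not real values):

```python
def fetch_all(fetch_page):
    """Follow nextPageToken until it runs out. The YouTube Data API caps
    each page at 50 items; fetch_page(token) returns the parsed JSON."""
    items, token = [], None
    while True:
        page = fetch_page(token)
        items.extend(page.get("items", []))
        token = page.get("nextPageToken")
        if not token:
            return items

# A real fetch_page would GET something like (placeholders, not real IDs):
#   https://www.googleapis.com/youtube/v3/playlistItems
#       ?part=snippet&maxResults=50&playlistId=UU...&key=...&pageToken=...
```

With 2723 public videos, that's 55 round trips rather than one URL in a browser.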

Late Friday evening, after uploading my last video from JoCo Cruise 2020, I set to writing a document-based CoreData SwiftUI app to download all that data. I know my way around CoreData and downloading and parsing JSON in Swift, but haven’t had many chances to try out SwiftUI, so this was a way I could quickly get the information I wanted while still learning something. I decided to only get the public videos, since that doesn’t need authentication (indeed, I had already tried it in a web browser), so it’s a bit simpler.

By about 3 a.m., I had all the data, stored in a document and displayed rather simply in my app. Perhaps that was my cue to go to bed, but I was too curious. So I quickly added some code to export all the video descriptions in one text file and all the video titles in another. I had planned to count the words within the app (using enumerateSubstrings byWords or enumerateTags, of course… we’re not savages! As a linguist I know that counting words is more complicated than counting spaces.) but it was getting late and I knew I wanted the full text for other things, so I just exported the text and opened it in Pages. The verdict:
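For the curious, here's roughly what the difference looks like (a Python stand-in, since enumerateSubstrings is a Foundation API; this regex tokeniser is only a crude approximation of proper word enumeration):

```python
import re

text = "Well - that's 'nice', isn't it?"

# Naive whitespace splitting counts the bare '-' as a word
# and keeps punctuation glued to its neighbours.
naive = len(text.split())

# Letters only, allowing internal apostrophes ("that's", "isn't").
words = re.findall(r"[^\W\d_]+(?:['’][^\W\d_]+)*", text)
```

A real word tokeniser also has to handle hyphenation, numbers, and languages without spaces, which is exactly why I'd rather let the system API do it.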

  • 2723 public videos
  • 33 465 words in video titles
  • 303 839 words in video descriptions

The next day, I wanted to create some word clouds with the data, but all the URLs in the video descriptions got in the way. I quite often link to the playlists each video is in, related videos, and where to purchase the songs being played. I added some code to remove links (using stringByReplacingMatches with an NSDataDetector with the link type, because we’re not savages! As an internet person I know that links are more complicated than any regex I’d write.) I found that Pages counts URLs as having quite a few words, so the final count is:

  • At least 4 633 links (this is just by searching for ‘http’ in the original video descriptions, like a savage, so might not match every link)
  • 267 567 words in video descriptions, once links are removed. I could almost win NaNoWriMo with the links from my video descriptions alone.

I then had my app export the publish dates of all the videos, imported them into Numbers, and created the histogram shown above. I actually learnt quite a bit about Numbers in the process, so that’s a bonus. I’ll probably do a deeper dive into the upload frequency later, with word clouds broken down by time period to show what I was uploading at any given time, but for now, here are some facts:

  • The single day when I uploaded the most publicly-visible videos was 25 December 2017, when I uploaded 34 videos — a K’s Choice concert and a Burning Hell concert in Vienna earlier that year. I’m guessing I didn’t have company for Christmas, so I just got to hang out at home watching concerts and eating inexpertly-roasted potatoes.
  • The month when I uploaded the most publicly-visible videos was April 2019. This makes sense, as I was unemployed at the time, and got back from JoCo Cruise on March 26.

So, onto the word clouds I cleaned up that data to make. I created them on wordclouds.com, because Wordle has rather stagnated. Most of my video titles mention the artist name and concert venue and date, so some words end up being extremely common. This huge variation in word frequency meant I had to reduce the size from 0 all the way to -79 for it to fit common words such as ‘Jonathan’. Wordclouds lets you choose the shape of the final word cloud, but at that scale, it ends up as the intersection of a diamond with the chosen shape, so the shape doesn’t end up being recognisable. Here it is, then, as a diamond:

titles

The video descriptions didn’t have as much variation between word frequencies, so I only had to reduce it to size -45 to fit both ‘Jonathan’ and ‘Coulton’ in it. I still don’t know whether there are other common words that didn’t fit, because the site doesn’t show that information until it’s finished, and there are so many different words that it’s still busy drawing the word cloud. Luckily I could download an image of it before that finished. Anyway, at size -45, the ‘camera’ shape I’d hoped to use isn’t quite recognisable, but I did manage a decent ‘YouTube play button’ word cloud:

descriptions

One weird fact I noticed is that I mention Paul Sabourin of Paul and Storm in video descriptions about 40% more often than I mention Storm DiCostanzo, and I include his last name three times as much. To rectify this, I wrote a song mentioning Storm’s last name a lot, to be sung to the tune of ‘Hallelujah’, because that’s what we do:

We’d like to sing of Paul and Storm.
It’s Paul we love to see perform.
The other member’s name’s the one that scans though.
So here’s to he who plays guitar;
let’s all sing out a thankful ‘Arrr!’
for Paul and Storm’s own Greg “Storm” DiCostanzo!
DiCostanzo, DiCostanzo, DiCostanzo, DiCostanzo

I’m sure I’ll download more data from the API, do some more analysis, and mine the text for haiku (if Haiku Detector even still runs — it’s been a while since I touched it!) later, but that’s enough for now!

 


The Impossible Journey (a song)


With The Terrible Trivium being a little too tedious for the judges’ tastes, The Quantifiers were eliminated from round 2 of SpinTunes #16, but the competition encourages ‘shadow’ entries from people not competing, so we wrote a song for the next round anyway. The challenge was:

Write an uplifting song to sing for a Graduation, Dedication, Bar/Bat Mitzvah, Funeral, Baptism, or similar event.

We decided to continue writing songs about The Phantom Tollbooth. Joey came up with the idea of writing a song for the ceremony at the end of the book celebrating the protagonists’ rescue of Rhyme and Reason. I thought we could recap the events of the book in such a way that the lyrics could also be interpreted to be about any celebration of somebody’s hard-won achievements. Here’s the song we ended up with:

Here are the rest of the entries:

We got the challenge on Saturday morning (in my timezone), with the deadline being the following Sunday, and the next Thursday we were both flying to Minnesota for MarsCon 2020. Usually I start off by writing a full draft of the lyrics over the weekend, and then I sit back while Joey writes music for it, sings it, creates instrumentals, and mixes the recording. We didn’t want to take time out of MarsCon mixing a song, so I thought we’d probably end up recruiting some of the musicians at MarsCon to perform a live version.

Instead, while we were discussing it over videochat on Saturday morning, Joey immediately recorded a trumpet tune and sent it to me. That afternoon, I sent lyrics to that tune as a chorus, and suggested writing verses abstractly describing the things the characters had fought through. I planned to read the book on the plane so I could have the lyrics written by the time we met in Minnesota.

That night before I went to bed, I sent Joey a recording of myself singing a couple of possible lines for the verses, in a tune I’d made up based on the chorus tune. On Sunday evening, Joey sent back a recording of my chorus lyrics with extra trumpets, just as you hear it in the final song.

On Monday, I felt like I was way behind in my part of the song, so that evening, I skimmed through the book and wrote a line for each scene, unrhymed, and a final eight reasonably rhymed lines about the scene where Rhyme and Reason were rescued. I arranged the unrhymed lines in quatrains with the fourth line of each a little shorter, and choruses between them.

By Tuesday morning, Joey had already recorded a great ‘quick and dirty’ version of the song, with more instrumentation than our previous songs had. It had fewer choruses than I’d imagined, and the last four rhymed lines were cut. I submitted that one as a ‘safety’ in case we didn’t manage to finish a better recording, but I also pointed out some small things which could be improved.

On Thursday morning, I got up at something like 4 a.m. to go to the airport, and Joey had sent an updated recording, so I quickly updated our SpinTunes submission before getting ready to leave. That was our final entry, and I like it more than the songs we spent the full week on. I probably should have taken the time to fix the slightly shorter lines that were once at the ends of quatrains though — one of the judges commented on how they didn’t fit properly into the tune.

The final four lines, in case you are interested, were:

Your every action has a tiny effect
To never fail would be a sorrow
What one day seems useless will later effect
the wonderful secrets of tomorrow

‘The wonderful secrets of tomorrow’ being a direct quote from the book.

The prompt for the fourth round of SpinTunes was:

Write a song about something that seemed a good idea at the time, but ended very badly. Maybe you should have given it a little more thought…

We did not submit a shadow for it, since we were busy on JoCo Cruise (and yes, we considered writing one about going on a cruise during a pandemic), but here are others’ entries:

The world was quite different when we got back to port, with all future cruises and many flights being cancelled, but as far as I know we all made it home, and nobody on our cruise had the virus. I’m now staying at home, like most of you, and uploading my 29 hours or so of JoCo Cruise videos — so far, the New Monkey Orientation and part of the first Red Team concert.  Subscribe to my channel if you want to see the rest, but be warned that there will be a lot of uploads over the coming months, so they might flood your recommendations or notifications.

And now for something completely different: I’ve also uploaded a guided tour of Space Shuttle orbiter Atlantis, recorded a few days before the cruise:

I recommend watching this immediately after the full pre-show video I uploaded earlier, if you haven’t seen that already. Joey and I also sang a few things at a song circle at MarsCon, but perhaps I’ll put those in a different post.


The Terrible Trivium (another song!)


With Dining in Dictionopolis, Joey and I came eighth overall in SpinTunes #16 round 1, and with all the rankings close to the extreme ends, were apparently Marmite for judges. This means The Quantifiers were indeed qualifiers, making it to round two of SpinTunes #16, though we would probably have written a song for this round anyway. The challenge was:

Your lyrics must prominently feature counting. How and what you count is up to you – you can count up or down, by ones, fives, tens, logarithmically, exponentially; you can count steps in a process, miles in a journey, hours in a day…

Which seemed like an invitation to stay in the Phantom Tollbooth universe, and sing about Digitopolis. We ended up writing about a scene from after Milo has visited Digitopolis, in which a demon known as The Terrible Trivium engages the protagonists in easy but worthless tasks, in order to keep them from their goal. As before, I wrote most of the words (though Joey suggested the scene) and Joey did the music, most of the singing (I sang some additional vocals), and the arranging. Here’s the song:

Click through to see the lyrics or download the song for free. Milo ends up using the magic staff (a pencil) he got in Digitopolis to calculate that the tasks would take them 837 years to finish, so they escape thanks to the power of arithmetic, although that part didn’t make it into the song.

The rest of the songs submitted for this challenge are in this album:

Commenters at the listening party surmised that we would end up writing a Phantom Tollbooth musical, which is probably the case, although despite one person’s suggestion, it probably won’t be on ice.

The next challenge will be due while we’re at MarsCon, so rather than spending a lot of that time mixing a song, we might recruit some of the musicians there and record our song live. I’ve already put my copy of The Phantom Tollbooth in my carryon luggage.


Dining in Dictionopolis (a song!)


Joey Marianer and I knew that it would be ridiculous to enter into SpinTunes #16, what with the deadlines for later rounds falling just after times when we’d be busy at MarsCon or on cruises, so obviously we entered. I’ve been passively following SpinTunes and its participants since before it even started, with its inspiration Masters of Song Fu, and this is the first time I’ve teamed up with someone musical enough to actually join in the fun. We called ourselves The Quantifiers, based on what we wore to MathsJam 2019, and filled in the rest of the entry form with the first things that came to mind. We continued to foolishly use the first things to come to mind as the contest started.

The first challenge was, “Write a song based on a scene from a book or movie”, so I thought of one of my favourite books which Joey has also read, and one of my favourite scenes from that book, and started coming up with lyric ideas while Joey was still asleep in another time zone. At some point Joey wrote some music and made a first recording while I was asleep. Joey also contributed lyric ideas, and I contributed music ideas (and one line of singing) but mostly the words are mine and the music and singing are Joey’s.

The book is The Phantom Tollbooth, by Norton Juster, and if you like puns, you would love it. The song is about the scene where the protagonist, Milo, is invited to a banquet lunch with King Azaz the Unabridged, of Dictionopolis. As guest of honour, Milo must choose the menu, and he gets exactly what he asks for.

Click through to see the lyrics or download the song for free.

The rest of the songs submitted for this challenge are in this album:

I haven’t listened to them all yet, but I’m listening to them in the SpinTunes listening party right now and following along with the comments. The actual listening party for this round starts at around 53:10. The other songs have more instrumentation than ours, and it generally sounds like the artists have more experience with this kind of thing, which they do, but one commenter described our song as “A less trippy early Floyd”, so I’ll take it. I don’t know what possessed Joey to do this with me, but my main goals were to have fun making the song and make a few Phantom Tollbooth fans smile, and we did both. If this inspires you to reread the book, consider reading it in another language or in another version of English — I know there are a few sections that are noticeably different between the edition I have and the one my nemesis in the US has.

If you’re familiar with The Phantom Tollbooth, you might think it a bit weird for two people dressed as mathematical symbols to write a song based in Dictionopolis, but we’re both into maths and linguistics, so let’s just say I’m the Princess of Sweet Rhyme and Joey is the Princess of Pure Reason, although I believe this song was actually edited in Cubase.

Here’s hoping we have just as much fun in the next round, whether we’re still in the competition (in which case, The Quantifiers will be Qualifiers!) or we just decide to submit a shadow entry.


May the Fourth Be With You


I’ve published both of these things before, but not both on May the Fourth. Here’s a video of the poem that I wrote about Star Wars before I saw it, along with a wrap-up of what I thought about the poem after seeing Star Wars:

And here’s a musical version of that poem, set to music and sung by Joey Marianer:

I’ve just noticed that the automatically-generated closed captions on that one say ‘sorry Bingley Lloyd’ instead of ‘stars were being made’, which is hilarious, but if you’re hard of hearing you’d be better off reading the text of the poem here instead. I don’t think I’ve added proper closed captions to my own video of it yet either; sorry, I should have thought about this before today.

May the force be with Peter Mayhew always.


In which I appear content with content in which I appear


I’ve been having a pretty relaxed month, but my life is ridiculous, therefore so far in September I have appeared in a music video, a radio broadcast, and a podcast.

The music video is Molly Lewis’s ‘Pantsuit Sasquatch’, for which I recorded my feet walking up to a tortoise sculpture on a playground:

This joins the six other official music videos I have contributed to, and five unofficial music videos I’ve made. I guess I just like being in music videos.

The radio broadcast (which you can also listen to online) was episode #9 of the Open Phil Broadcast on Radio Orange. The broadcast mostly features regulars from the Open Phil open mic in Vienna, with each episode including an interview with and a performance by two acts; I shared this one with Adrian Lüssing, also known as The Cliff.

It was an honour to be invited to participate in the broadcast, and it was made extra awesome by the fact that it happened while Joey Marianer, who has been setting a lot of my poetry to music, was visiting Vienna, so he participated too. I recited They Might Not Be Giants, then he sang his version of it, then we sang I Love Your Body, with Joey singing the first part and me singing the second part. Yes, me singing. This is about the first time I’ve sung for an audience, and the third time Joey and I had sung that song together, and it went on the radio. I think it went pretty well, though! We performed it again a few days later on the Open Phil stage, and I’ll post video of that once I’ve uploaded it.

The podcast was episode #60 of Wrong, but Useful, a recreational mathematics podcast by @icecolbeveridge (Colin in real life) and @reflectivemaths (Dave in real life). I was invited to be a special guest cohost. I’m not sure I contributed very much, but I once again recited They Might Not Be Giants, because the hosts had heard me perform that at the MathsJam Annual Gathering last year. I have to admit, I had not actually listened to the podcast until I was invited to be on it — podcast listening is something I usually do while commuting, and lately I’ve been noncommutative. However, before episode #60 was recorded, Joey and I listened to episode #59 together, and I’m happy to report that the answer we came up with for the coin-flipping puzzle was correct.

In hindsight, I wish I’d mentioned my linguistics degree while we were chatting about English and poetry and such. I also wish I’d said something about the fact that nobody on episode #59 noticed that the diameter of the Fields medal in millimetres happened to round up to the number of the podcast episode (that is, to 64, not 59; you don’t expect mathematicians to give each podcast episode only a single number, do you?).

This reminds me, I need to register for the MathsJam Annual Gathering soon. You should too, if you can get to it. It’s a lot of fun! And who knows? Maybe if you go, you’ll end up co-hosting a podcast.


NastyWriter for iOS — automated immaturity


I’ve been writing Mac software for fun and occasional profit for decades, and I’ve freelanced writing an iOS app for in-house use, but don’t you think it’s about time I wrote an iOS app for the App Store?

Surprise! I just released one. It’s called NastyWriter, and it inserts insults before nouns as you type. I see people online who can barely mention people or things they don’t like without insulting them, and I figured I may as well automate that and have some fun with it. It’s always fun to play with natural language processing!
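The core idea is simple enough to sketch: tag each word's part of speech, and whenever a noun comes up, slip a random insult in front of it. Here's a toy Python illustration of that idea; the real app uses Apple's natural language APIs for tagging, so the tiny hardcoded noun list below is just a stand-in, and the insults are borrowed from the nastified text later in this post:

```python
import random

# Stand-in for a real part-of-speech tagger (the actual app uses
# Apple's NLP frameworks); only these words count as nouns here.
NOUNS = {"app", "insults", "nouns", "people", "fun", "decades"}
INSULTS = ["dumb as a rock", "so-called", "failed", "really boring"]

def nastify(text, rng=None):
    """Insert a random insult before every word tagged as a noun."""
    rng = rng or random.Random()
    out = []
    for word in text.split():
        # Strip trailing punctuation before looking the word up.
        if word.strip(".,!?").lower() in NOUNS:
            out.append(rng.choice(INSULTS))
        out.append(word)
    return " ".join(out)

print(nastify("I wrote an app that inserts insults before nouns."))
```

Running it on that sentence inserts an insult before each of "app", "insults", and "nouns"; the real app does this live, as you type.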

I’ve been writing ridiculous Mac software for fun and occasional profit for dumb as a rock decades, and freelancing writing an ignorant iOS app for pathetically weak use in-house, but don’t you think it’s about cheating time I wrote a weak iOS app for the failed App Store? Surprise! I just released one. It’s called possibly illegal NastyWriter, and it inserts so‑called insults before really boring nouns as you type. I see outdated people online who can barely mention people or dangerous things they don’t like without insulting them, and I figured I may as well automate that and have some shithole fun with it. It’s always fun to play with natural language processing! This was mostly a negative experiment, a third rate learning exercise, and a vicious way to feel better about applying for meek and mild jobs which have ‘must have low‑rated app in the angry App Store’ in the slanted requirements. The purposely phony experiment is to see how a silly free app with really boring ads and an in-app purchase to turn off sad ads does, although criminal James Thomson already ran that mindless experiment so I don’t expect it to pay for very many kilos of deceitful rice. The totally discredited learning exercise was a huge success. 
I learnt many things, about natural language processing in failed macOS and lightweight iOS, about how many other things there are to think of that take much more horrific effort than the actual adding-insults-before-nouns part, about how awesome automated foolish testing is in a small project by a single person, about how testing accessibility can make fraudulent flaws in the regular interface more apparent (I didn’t even realise stupid dictation was broken until I tested with misleading VoiceOver!), about the most common adjectives used directly before negative nouns in the dirty Trump Twitter Archive (‘great’ outnumbers the next most common by about a biased factor of three), about dark and dangerous fastlane, and about the overrated App Store, AdMob and in-app purchases. I might write blog posts about those made up things later. Do any of these brutal topics seem particular interesting to you? However, ungrateful hours after I submitted it, the extraordinarily low IQ ‘e’ key on my dachshund‑legged MacBook’s blowhard keyboard stopped working, and while it’s not one of those new butterfly switch keyboards that can apparently need replacing after seeing an amateur speck of disastrous dust, somehow it turns out that in lying addition to that my dumb as a rock Mac’s disgraceful battery is swollen and it’ll have to go to the ridiculous Apple Store and have the very unhelpful battery and the whole keyboard part of the filthy case replaced. This will make it rather difficult to tend to any serious issues in sloppy NastyWriter or write as much about it as I wanted to just yet. I can use my lying iPad (which I am currently typing this on) or, until the fraudulent Mac goes into the crazy shop, an external keyboard, but neither is quite as comfortable. Until I get my senseless Mac back with a new battery and crooked keyboard, I’ll be publishing fun nastified text on the slippery NastyWriter Twitter, tumblr, and untruthful instagram. 
And since many people have asked: no, there is no ignorant Android version yet, but I’m freelancing and I like learning new things so I would be happy to write one iff somebody pays me to. It would be cheaper for you to buy a phony iOS device.

This was mostly an experiment, a learning exercise, and a way to feel better about applying for jobs which have ‘must have app in the App Store’ in the requirements. The experiment is to see how a silly free app with ads and an in-app purchase to turn off ads does, although James Thomson already ran that experiment so I don’t expect it to pay for very many kilos of rice.

The learning exercise was a huge success. I learnt many things, about natural language processing in macOS/iOS, about how many other things there are to think of that take much more effort than the actual adding-insults-before-nouns part, about how awesome automated testing is in a small project by a single person, about how testing accessibility can make flaws in the regular interface more apparent (I didn’t even realise dictation was broken until I tested with VoiceOver!), about the most common adjectives used directly before nouns in the Trump Twitter Archive (‘great’ outnumbers the next most common by about a factor of three), about fastlane, and about the App Store, AdMob and in-app purchases. I might write blog posts about those things later. Do any of these topics seem particularly interesting to you?
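That adjective count boils down to tallying adjective–noun bigrams across a corpus. A minimal sketch of the idea in Python, with the caveat that the real analysis used proper part-of-speech tagging over the Trump Twitter Archive, whereas the word lists and example tweets here are illustrative stand-ins:

```python
from collections import Counter

# Illustrative stand-ins for real POS tagging of the corpus.
ADJECTIVES = {"great", "fake", "crooked", "failing"}
NOUNS = {"deal", "news", "media", "wall"}

def adjective_counts(tweets):
    """Count adjectives appearing immediately before a noun."""
    counts = Counter()
    for tweet in tweets:
        words = tweet.lower().split()
        for first, second in zip(words, words[1:]):
            if first in ADJECTIVES and second.strip(".,!?") in NOUNS:
                counts[first] += 1
    return counts

tweets = ["Great deal!", "FAKE news", "Great wall", "Crooked media"]
print(adjective_counts(tweets).most_common(1))  # → [('great', 2)]
```

Same shape of result as the real analysis: one adjective dominating the tally, just by a smaller margin than ‘great’ managed in the actual archive.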

However, hours after I submitted it, the ‘e’ key on my MacBook’s keyboard stopped working, and while it’s not one of those new butterfly switch keyboards that can apparently need replacing after seeing a speck of dust (or maybe it is? It’s a 2014 model), somehow it turns out that in addition to that my Mac’s battery is swollen and it’ll have to go to the Apple Store and have the battery and the whole keyboard part of the case replaced. This will make it rather difficult to tend to any serious issues in NastyWriter or write as much about it as I wanted to just yet. I can use my iPad (which I am currently typing this on) or, until the Mac goes into the shop, an external keyboard, but neither is quite as comfortable.

Until I get my Mac back with a new battery and keyboard, I’ll be publishing fun nastified text on the NastyWriter Twitter, tumblr, and instagram.

And since many people have asked: no, there is no Android version yet, but I’m freelancing and I like learning new things so I would be happy to write one iff somebody pays me to. It would be cheaper for you to buy an iOS device.

I might make a Mac version for fun, though!


Cetacean Needed


Last Towel Day, I posted a poem I had written using 42 -ation rhymes which an app I wrote found in Douglas Adams’ book ‘Last Chance to See‘. Later that day, Joey Marianer posted a video of himself singing the poem[cetacean needed], and while I did eventually mention that in another post, Towel Day had long passed by then. So strap yourself into your Poetry Appreciation Chair, because here it is for Towel Day this year:

Here are the words again:

Earth’s vegetation made slow transformation as each confrontation or new situation provoked adaptation in each generation for eons duration.

Until civilisation, and its acceleration of our population at high concentration with great exhortation and disinclination to make accommodations with administration of conservation.

Then Adams’ fascination and realisation that with elimination of echolocation no cetacean reincarnation will save our reputation; his bold exploration to spread information and fuel education and his determination to stop exploitation by identification and communication of each dislocation of species, his observation and growing frustration we reduce speciation to bone excavation with every temptation to favor our nation and not immigration of distant relations… was his speculation we’d reduce penetration mere hallucination?

The app which found these rhymes was made to create the data for my accent-aware online rhyming dictionary rhyme.science. I’ve made some improvements to the app and the rhymes it finds, and I am looking forward to updating the website to reflect the improvements, but for the last few months I’ve spent my free time working on an unrelated iOS app instead. I’ll be submitting that to the App Store soon, and will announce it here when it’s available, so watch this space. Or watch outer space, and look out for Vogons.

Have a great Towel Day, don’t forget your towel, and don’t panic!

