Inadvisable Relationship Chatbot (WIP Post #1)

There is a joke conference at MIT CSAIL called SIGTBD (many other schools have something similar; in particular, we coordinated with CMU’s when figuring out how to go virtual).

A long time ago, in the pre-COVID days of 2019, I made a “submission” to SIGTBD over the course of about 24 hours.

This is the abstract:

Many graduate students struggle to deal emotionally with daily life stresses, ranging from overdue problem sets to not dying at sea. Additionally, computer scientists may feel more comfortable typing at a screen than engaging in human contact. Although therapy chatbots exist and in fact have reached millions of smartphone users, they run on remote servers, creating privacy concerns. In this work, we propose that a local chatbot can also provide useful advice and can reach the vulnerable sub-population of computer science grad students. We create InadvisableRelationshipBot (IRBot) using high-quality online commentary from

And the PDF here:

That was created around the time GPT was first coming out in libraries. So I wanted to update the chatbot with the latest machine-learning goodness, since I remember being kind of disappointed with the unintelligible output of the chatbot. Now I realize that’s part of the funniness.

So here is my quick one-day attempt at improving the chatbot (most of which was spent scraping reddit T^T). Left is the previous chatbot; right is the one I made this weekend.

Left: 2019 chatbot, 25k rows data. Right: 2022 chatbot (gpt), 5k rows data.

The chatbot has way more reasonable responses, but is far less funny. So I’ll have to spend some time tweaking that.


At a high level, I adapted a Colab notebook for finetuning the DialoGPT model (originally on Rick and Morty dialog). DialoGPT is made by Microsoft and can be found in the Hugging Face transformers library.

The transformers library allows for easy finetuning. So, we take the default DialoGPT model (which is available in three sizes) and apply it to /r/relationships data. Here is a comparison of the DialoGPT trained on all of reddit vs the AdviceBot, which takes that model and trains it further on just /r/relationships.
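For reference, the core trick in that kind of finetuning setup (as I understand it) is just string formatting: each training row is a response glued to its preceding context turns with the GPT-2/DialoGPT end-of-text token. A minimal sketch; the function name is my own, not the notebook’s:

```python
# Sketch of how DialoGPT-style training rows are built: each row is the
# response preceded by up to N context turns, all joined by the EOS token.
EOS = "<|endoftext|>"  # GPT-2 / DialoGPT end-of-text token


def build_training_row(context_turns, response, eos=EOS, max_context=7):
    """Join the last few context turns and the response into one string."""
    turns = list(context_turns)[-max_context:] + [response]
    return eos.join(turns) + eos


row = build_training_row(
    ["My partner never texts back.", "How long do you wait?"],
    "tl;dr: communicate, then dump them.",
)
```

During actual training you’d tokenize these rows and hand them to the usual language-modeling loss; the EOS tokens are what teach the model where turns end.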

I’m not sure what the bot freaking out with 1!??!! is about. I’ll have to find some NLP person to ask.


I was super grateful for the methods section / time I put into documenting this in 2019. This time around I used PRAW and pulled the 200 hot posts from the past year (the top-voted posts tend to be “updates,” not the Q&A I want). Deciding how I wanted to structure the data and how to clean and sort posts consumed most of my brainpower T^T. E.g.

  • remove posts with “update” in title
  • keep only the text after the tl;dr
  • don’t use the top-all time posts, as those will be mostly updates
  • use the reply-to-replies to create more “dialog” like the rick and morty captions
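The cleaning steps above, as a sketch. The two helper functions are my own names and run as-is; the PRAW part is commented out since it needs API credentials, and I’m writing those calls from memory, so double-check against the PRAW docs:

```python
import re


def keep_post(title: str) -> bool:
    """Drop 'update' posts, since they aren't the Q&A-style text we want."""
    return "update" not in title.lower()


def extract_tldr(selftext: str) -> str:
    """Keep only the text after a tl;dr marker, if one exists."""
    m = re.search(r"tl;?dr:?\s*", selftext, flags=re.IGNORECASE)
    return selftext[m.end():].strip() if m else ""


# The scraping itself (needs Reddit API credentials):
#
# import praw
# reddit = praw.Reddit(client_id=..., client_secret=..., user_agent=...)
# for post in reddit.subreddit("relationships").hot(limit=200):
#     if keep_post(post.title) and extract_tldr(post.selftext):
#         ...  # save (context, tl;dr) pair as a training row
```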

In the end I used about 600 rows, giving the results you see above. Not bad. The other model in 2019 was trained on 25k rows, but if you go by that metric, the DialoGPT I finetuned was first trained on 147M conversations. And finetuning only took <10 minutes on free Google TPU compute.

(I’m also curious how well a Markov chain would do.).
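For the curious, a word-level Markov chain is only a few lines. A minimal sketch:

```python
import random
from collections import defaultdict


def train_markov(text):
    """Map each word to the list of words that follow it in the corpus."""
    words = text.split()
    chain = defaultdict(list)
    for a, b in zip(words, words[1:]):
        chain[a].append(b)
    return chain


def generate(chain, start, n=10, seed=0):
    """Random-walk the chain to babble out up to n words."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(n - 1):
        nxt = chain.get(out[-1])
        if not nxt:
            break
        out.append(rng.choice(nxt))
    return " ".join(out)
```

Trained on the same /r/relationships rows, this would likely be much less coherent than DialoGPT, but possibly funnier, which is sort of the point.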

Some finagling was needed to get a “person A / person B” style. However! Since the Rick and Morty dialogs come from a consistent cast, that chatbot develops a distinctive style. But here it’s a ton of different people in different styles contributing the dialog. So, less distinctive.

Training data from /r/relationships.
On the left: what the chatbot should respond with. On the right: what the user said beforehand.

Some other funny links:

  • A Markov chain trained on the King James Bible and SICP (Structure and Interpretation of Computer Programs). Sample output:

37:29 The righteous shall inherit the land, and leave it for an inheritance unto the children of Gad according to the number of steps that is linear in b.

  • GPT2 on Ted Talks
  • A subreddit of humans pretending to be robots pretending to be humans. WARNING: many posts NSFW (nsfl?)
  • A subreddit by someone training a 1.5gb model of GPT2 on 500k posts. Pretty darn coherent x__x. See more details about the subreddit simulator here:


I did learn that despite all the hype about GPT etc., chatbots are nowhere near realistic… generating one-off text, or single-line replies, maybe. But GPT is fundamentally stateless, and you deal with that by just appending the previous text and feeding the entire thing through the model…
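That append-the-transcript trick, sketched with a stub in place of the real model (chat_turn and fake_model are my own hypothetical names, not the notebook’s code):

```python
# GPT-style models are stateless: "conversation memory" is faked by
# concatenating the whole transcript and re-feeding it every turn.
EOS = "<|endoftext|>"


def chat_turn(history, user_msg, generate_reply):
    """Append the user's message, run the model on the full transcript,
    then append the reply so the next turn sees everything so far."""
    history = history + [user_msg]
    prompt = EOS.join(history) + EOS
    reply = generate_reply(prompt)
    return history + [reply], reply


# Toy "model" that just reports how many turns it was fed:
fake_model = lambda prompt: f"[{prompt.count(EOS)} turns so far]"

history, reply = chat_turn([], "hi bot", fake_model)
history, reply = chat_turn(history, "my roomie ate my dim sum", fake_model)
```

The prompt grows every turn, which is also why long chats eventually fall off the model’s context window.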

It’s cool to see that more specialized conversational AI is trained on data that is separated by a “personality” hierarchy (Persona by Facebook).

Also, I’m essentially using this as a Q&A bot and not treating it as something with state. So that might be a fork in the project: one where you sort of chat through your problems with a friend, and another which is, well, seeking feedback from the collective internet.

But my immediate next step is to just have it generate closer to three lines, for increased hilarity. As well as make it more fun to interact with (vs re-running a notebook on Colab every ten exchanges).

Final Funny Exchanges

oh my

PoV Yoyo Project Rebooted (WIP Post #2): How fast does a yoyo spin?

Decided to measure empirically. Turns out if you just drop it, very slowly!

Note: daylight was critical for slow mo video.

First off, rotational inertia turns out not to be a negligible component. If you drop something side by side with the yoyo, the yoyo falls suppppeerrr slowly. Here is my awesome roomie showing this (using Samsung Galaxy S9 slow-mo; online says it should be 240 fps)

Look how slow it is compared to a battery!

Second, I think the yoyo only spins at a couple hundred rpm!!! Maybe it can be faster when thrown. But wow, my back-of-envelope calculation was an order of magnitude off. Hah!

The yoyo will be spinning fastest near the bottom of the drop. So we can take this clip and run it through ffmpeg to split out individual frames, then simply count the number of frames per revolution.

ffmpeg -i yoyo_drop_bottom.mp4 -r 30 output_%04d.png

From there I count the time between revolutions as

9 frames
9 frames
8.5 frames
8.5 frames
8.5 frames
8.5 frames
8 frames
8.5 frames
8 frames

Now, the video was originally recorded at 240 fps (no way to confirm this for now), and I am outputting images at 30 fps. That means each output frame covers a longer chunk of time: of every 8 frames at 240 fps, I only keep 1 frame to get 30 fps. So 8 frames at 30 fps = 64 frames at 240 fps.

Thus we know that each revolution took 64/240 ≈ 0.27 seconds. (60 sec/min) / (0.27 sec/revolution) ≈ 225 rpm. Way less than 3000 rpm!
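The same arithmetic, using all nine measured revolutions instead of a single 8-frame one (still assuming the 240 fps spec and the 1-in-8 subsampling); averaging nudges the answer down a bit, to roughly 212 rpm:

```python
# RPM from the frame counts above. Assumes the slow-mo really is 240 fps
# and that ffmpeg kept 1 of every 8 source frames to reach 30 fps output.
frames_per_rev = [9, 9, 8.5, 8.5, 8.5, 8.5, 8, 8.5, 8]  # measured at 30 fps
avg = sum(frames_per_rev) / len(frames_per_rev)          # 8.5 output frames/rev

source_fps = 240
output_fps = 30
subsample = source_fps // output_fps                     # 8 source frames per output frame

seconds_per_rev = avg * subsample / source_fps           # real time per revolution
rpm = 60 / seconds_per_rev
print(round(seconds_per_rev, 3), round(rpm))
```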

Will have to repeat in daylight to see what the rpm is like when thrown.

I’m still not certain I believe the rpm number either, mostly as I’m not sure how much to trust the 240 fps spec and my math. Next step, I guess, is to do the trivial physics problem… but probably I’ll put work toward making the yoyo happen instead.

(The yoyo is from when I took MIT 2.008. We wanted to make a yoyo that whistled as it spun; this feature didn’t work, but we still had a nice ambigram.)

Pandemic Diary #74 – Catching and Un-catching COVID, aka My $350 Lesson in Conditional Probability

(i’ll sketch out this post and update it as i find the time) – actually i’ll just post in two parts, jeez, how can i write this much – so –

part 1

i caught covid holy f*k

…or so i thought

amusingly i am working through 6.041 on mit ocw, which is the undergrad probability class, and as a perfect complement to the lectures i got a real-life lesson in conditional probability

(i wrote a section on covid testing terms below if you’re unfamiliar)

testing positive

flew home to GA for new years. omicron had already picked up quite a bit, so i wore a proper N95 (i actually didn’t have a KN95 / N95 for the malta international trip) and returned to superstitious practices, in this case a new one: mouthwash. still not eating on planes

got a quickvue test via a kaiser permanente drive-through testing appointment (they had to turn people away). tested negative on 1/1 (three days after my flight)

had indoor dining in georgia 1/1 (begrudgingly) and freaked myself tf out. we had 5-6 ft spacing, 20% capacity (in a place that seats 200, so v. spacious); my table was 5 people, all double- and most triple-vaxxed. (i am not sure why it’s more impolite to turn down indoor dining than to invite someone in the first place without considering their safety budget…)

dim sum was v. tasty though

i think because of my grudge about being coerced to go (and also… the inviter mentioning having caught covid, having eaten indoors 3-4 times before, and not feeling the need to get a booster right away), i was convinced i’d catch omicron from the dining. it just felt like karma was waiting to get me after i did my trip to malta and somehow didn’t get COVID (to my parents’ credit, they did try to change it to outdoor dining)

i had trouble getting a test in GA (only checked CVS though), and the PCR tents were closed for the holidays. eventually 4 days passed without anyone having symptoms, so i just flew to boston and planned to quarantine for 5 days and get tested on days 3 and 5 as before. i also tried to source a day 0 test, since it was 5 days out from the dining.

roomie also had a sore throat but hadn’t been able to get tested (waited in the rain for half an hour outside a clinic), so great, i could get tests for both of us. i called the walgreens close to target (where i was going to curbside pick up some cough drops for him, since i wasn’t sure about spreading airport germs). they had tests in stock and i picked up four flowflexes with no issue. roommate’s came back negative!

…mine came back positive, wtf??

i felt perfectly fine (zero symptoms) but from what i understood the test was very accurate (cue conditional probability lesson i’ll explain later). like, i kept hearing a false positive rate of 0.05%, so i figured the likelihood i had covid, given a positive antigen and the amount of risk i’d taken over the past week (mostly the indoor restaurant dining and two flights), was pretty high

i actually thought that, since antigen tests are more likely to give false negatives than false positives, getting a positive despite having no symptoms meant it was extra likely i had (asymptomatic) covid, even more so than if i’d had symptoms and a positive antigen… turns out it’s the exact opposite!
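the gist, as i understand it, is just Bayes’ rule: with a tiny prior, even a tiny false positive rate produces a real chance of a false positive. a sketch below; the 0.05% false positive rate is the number i kept hearing, while the sensitivity and priors are made-up illustrative values:

```python
def p_covid_given_positive(prior, sensitivity, false_positive_rate):
    """Bayes' rule: P(covid | positive antigen test)."""
    true_pos = sensitivity * prior
    false_pos = false_positive_rate * (1 - prior)
    return true_pos / (true_pos + false_pos)


fpr = 0.0005   # the 0.05% false positive rate i kept hearing
sens = 0.65    # illustrative: antigen sensitivity is lower without symptoms

# if my real prior (a few careful exposures, zero symptoms) was ~0.1%:
low = p_covid_given_positive(0.001, sens, fpr)   # only ~57% chance i had it
# ...but with a ~5% prior (say, a symptomatic close contact):
high = p_covid_given_positive(0.05, sens, fpr)   # ~99%
```

so the no-symptoms case drags the prior down, which drags the posterior down with it, the exact opposite of my intuition.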


i freaked the fk out and shut myself in my room right away and asked everyone to mask up around me, and i avoided leaving my room and just went to sleep.

took a second test the next day to convince myself before i really committed to the 10 days of isolation and trying not to spread it to my roomies. all my roomies also had to deal with being close contacts

also positive, fk me. at this point i started notifying everyone i’d interacted with, as i was certain i had covid (certain enough i wasn’t even going to figure out the harvard pcr test for confirmation)

which, aside, still have no idea how to notify airlines – i think that comes through dr and department of health? –

presumably you contact a dr and they deal with everything – now i know you can call the insurance hotline! and they will connect you with a nurse remotely (nurse hotline) same day

ok, i have some trust issues with insurance and the health system, exacerbated by covid-caused instability in my living situation, but anyway

=== notes ===

if you haven’t had to do a lot of home testing for covid (e.g. if you are home all the time or get tested regularly for school), here are some terms for you

  • Antigen test – there are the rapid tests (results in ~15 mins) that are available in drug stores (Walgreens, CVS) for ~$10 to $15
    • QuickVue, BinaxNow, and Flowflex are the ones I’ve seen in the United States
  • PCR test – this almost always refers to RT-PCR, reverse-transcriptase polymerase chain reaction. these are the tests that need to be sent to a lab and take 1-3 days for results
  • Rarer: there are the isothermal PCR tests, which include the Cue tests that Google hands out to its employees, where you take a self-sample like the antigen tests, but it goes into a cartridge that runs PCR and reports results after 20-30 mins. these are a little more accurate (prolly) than the antigen tests
  • (Traditional PCR goes through multiple heat-cooling cycles, so I imagine the isothermal part is what saves the time)
  • Both of these are Nucleic Acid Amplification Tests (NAATs); for more see the CDC website. The layman’s explanation I came up with for traditional PCR (maybe similar for isothermal PCR?) is that you have a search keyword (a primer). if the keyword (nucleic acid) is found, the heat-cooling cycle makes a bunch of copies of it (amplification), which you can then detect more easily (test)
  • Anterior nasal swab – the PCR and Antigen tests can both use this method which is easy to perform by yourself at home. it’s where you shove a qtip or other stick maybe an inch into your nose and kinda rub it in circles and then (using same stick) do the same for other nostril, which i would’ve found so grosss but now it’s normalized
    • (other sampled areas include back of throat, or way up in the back of the nose which was what people complained about at the beginning of the pandemic ALMOST TWO YEARS AGO).
  • Color kit – both MIT and Harvard use these for PCR testing. you self-sample and return to contactless drop off in collection bins around campus. each kit has a barcode which you link to your harvard/mit id online, and results are reported on the color website

COVID facts

  • Original strain: 5-9 days to infectious/symptoms
  • Delta: 3-5 days to infectious/symptoms
  • Omicron: 1-3 days to infectious/symptoms
  • Generally can be infectious two days before any symptoms, when you feel healthy!
  • Asymptomatic = no symptoms, indistinguishable initially from pre-symptomatic
  • Isolation: stay inside your room, you’re infectious
  • Quarantine: stay inside your house just in case
  • Generally now considered that 2-dose vaccination is still very effective for preventing severe illness, even with so-called “break-through infections”
  • Booster shots now recommended if you’re 6+ months out
  • United States topped 1 million cases a day thanks to omicron
  • In the US, more people died in 2021 than 2020 from COVID
  • In MA, hospitalizations exceed the peak of last year’s winter surge

For a while it really seemed like Delta might be the last wave; the numbers didn’t go up nearly as much as last year, even after all the travelling for thanksgiving. And for a while it was unclear if omicron would actually behave very differently with a large portion of the population vaccinated. but it sure did!! booster shots have become mandatory for returning to school at Harvard and MIT