Pandemic Diary #76 – hot pot at home

  • buy induction stove ~$40
  • pot
  • scoop ladle (ideally a draining one)
  • 4 plates (for cooling stuff) – you want to scoop out the meat before it cooks into sadness, and the noodles and veggies before they dissolve
  • 4 bowls (for individual dipping sauce)
  • communal chopsticks for dipping into the pot

ingredients

  • vermicelli noodles
  • enoki and oyster mushrooms
  • scallions (for dipping sauce)
  • napa cabbage, chinese … veggies of some kind, you want thin stuff that cooks fast
  • lotus root, so tasty
  • fish balls
  • hot pot base seasoning (can also just use chicken bouillon cubes)
  • dried mushrooms and kelp
  • tofu poof balls (don’t get the other tofu thing, was gross)
  • Lots of thinly sliced meat (available from hmart) ~ 3 to 4 packs
wash and chop to prep
display nicely
prep dipping sauce – I put random stuff: chili sauce, sesame oil, garlic, scallions, soy sauce, some vinegar, then just mix

enjoy with friends! can put raw ingredients on side table
at the end you realize how many ingredients you still have and try to cook it all for storing

Note:

Seems like 99 ranch market is on instacart, may actually be cheaper delivered than going to hmart in person!

todo

  • fix bike charger box
  • hydroponics

compare to new years at home

of course cannot compare

I guess it is fun to cook as a shared activity / chore

Pandemic Diary #75 – the omicron wave, free kits, and a new omicron variant + lunar new year, snow storm, work reflections, chinese studies, and nature cams

quick post (edit: rip, became long) just because I haven’t blogged in a while, but am kind of busy

pandemic deaths are at an all-time high, but the omicron wave is dying down (per MA wastewater levels). there is however another omicron variant which is apparently just as infectious.

so I guess the universities were right about not closing back down. the omicron wave died down before school really started. (the harvard and mit dashboards are designed quite well.)

learned that cuba made 5 of its own vaccines. unlike many other developing countries, it has a good biotech sector and its own vaccine manufacturing.

free kits – now every household gets 4 free kits, and most insurance will reimburse self-bought at-home kits. planet money brought up the good point that a benefit of this government policy (even if it’s not enough for some households) is that it ensures a steadier demand for kit manufacturers. this way there isn’t an awkward lag during a surge, where people literally cannot find tests.

a lot of big companies are also offering free rapid tests for their employees

happy lunar new year!

got decorations off of amazon o__o seemed like a lower-effort way to celebrate than dumplings / hot pot (dangerous website: yamibuy.com)

lasercut felt, and the weird circles are lanterns! you have to pull them apart
*asian-ness intensifies*

never decorated like this before in the USA, low stakes since no one comes up the stairs except us

Of course every year there is the big spring gala; my roomie pointed out we don’t really have an equivalent in the US. But to be fair the US is a melting pot, and the Chinese community in each city has small versions of this too!

中央电视台春节联欢晚会 (CCTV Spring Festival Gala) 2022 Part 1/4

cat!

my roomie got a lion costume and a lunar new year costume for her cats!!! went about as well as you might expect haha

The costume without legs went over a bit better

daily life tidbits

miss clubbing (the maybe 3 times I went). miss just easily grabbing coffee with people. not interrogating everyone about their covid risks. being able to lick doorknobs with abandon. many other things i took for granted. see handwritten diary for spicier details lol

i should work on enunciation. some exercises I bookmarked just now.

work stuff

generally trying to be appreciative of all the opportunities I do have instead of thinking about how life could be better. hey. i’m surrounded by great friends and roommates. it’d be nice to be paid and valued more for my work, but it’s still really cool I get easy access to a lot of world-famous researchers. i may miss not being jaded, but life is pretty relaxing right now. i may long to be more of an ambitious / career-ladder-climbing / dream-big / push-my-limits type. but maybe at such a time i’ll look back with longing at my relaxed lifestyle now. i may wish i lived in a place more infused with startup culture. but i have a pretty great STEM network compared to many others, if I supply the enthusiasm. i may wish my work had a clearer real-life impact. but it’s still worth the attempt.

wait, work stuff.

I am part-time interning at Scotiabank. https://sloanreview.mit.edu/article/catching-up-fast-by-driving-value-from-ai/ But not on any of the projects mentioned. I guess it really is a huge bank!

yea. i realized that i may not work in an NLP lab, but that should hardly stop me!! learn online like others. as I work on more production stuff, here is something I have found lately that I am going through / seems useful:

http://fullstackdeeplearning.com/spring2021/lecture-7

The tl;dr of this

was working through 6.041, though got distracted. finished lec 8, starting on lec 9 next

fun stuff

great video, highly recommend watching the whole thing. has plot twists and everything. but the non-plot bits are: you can mix powdery snow with water with your hands to make it firmer. can use a bread knife. the long overhanging dinosaur neck is supported with an internal wooden stick!! can use ikea boxes with car wax on the inside to form cubes.

snow

almost two feet in some drifts, e.g. our balcony.

snow plows would come through and the cars on the side of the street got even more wrecked

forecast

nws boston twitter is great. as is their weather forecast discussion. https://www.weather.gov/box/winter

The website gives probabilistic forecasts – e.g. a 1/10 chance and a 9/10 chance. Cool to see. When the prediction was still 16” of snow, there was only an 80% likelihood of >8” of snow. forecasting seems hard overall. this was interesting:


i learned there can be dry blizzards without snowfall. just the wind whipping up existing snow.


I remember this person from 2015 storms.


milk and bread meme

boston was so well prepared for this. governor was totally relaxed about it. market basket was open day of. no lines. next day totally normal access to everything.

background media

fruit bats are so big! and their wings look so fragile. they groom their wings too
priscilla (cat in residence) likes this video

chinese

working through ep 29 of

“CuteProgrammer 程序员那么可爱”

still going through MZDS, the original chinese novel form of the untamed.

to do: read about dumplings around the world!

https://www.bbc.com/zhongwen/simp/world-60067537

random internet

also, apparently one possible explanation for the number of chinese restaurants in the US: the chinese exclusion act had an exception for business owners, which included restaurant owners. so they could go to and from china when others could not. o__o per a random youtube vid (mental floss channel) about chinese food in the US

also, in the vein of my interest in training not-dogs: birds trained to pick up litter. start with a timed feeder so the birds know there’s food there. then put out a bunch of litter, so when birds accidentally knock some in they learn that litter gets them food. then they tell other birds!

adventures?

soon, ice castles in new hampshire.

http://help.icecastles.com/en/collections/1504557-plan-your-trip

https://mitoc-trips.mit.edu/trips/?after=2021-02-01 – wow, so many people i know / haven’t talked to in years rip

Mt. sunapee is a ~2 hr drive. but due to staffing shortages, a lot of lines…

perhaps yet more youtube to learn to ski? Some advice from friends:
“You just put all your weight on the big toe of your outer ski and you turn. Lift the inner ski slightly so it can rotate and stay parallel.” “If you are learning to ski, no need for a large mountain. McIntyre in Manchester is easy. Looking to get out? Rent snowshoes. Buy microspikes for hiking. Try Mount Cardigan. AMC has facilities there. That was the winter destination from Boston before mechanical lifts”

summary

wow my youtube consumption has increased drastically 0:

Inadvisable Relationship Chatbot (WIP Post #1)

There is a joke conference at MIT CSAIL called SIGTBD (actually many other schools have something similar, in particular we organized with CMU’s when figuring out how to switch to virtual).

A long time ago in the pre-COVID days, aka in 2019, I made a “submission” to SIGTBD, put together over the course of about 24 hrs.

This is the abstract

Many graduate students struggle to deal emotionally with daily life stresses, ranging from overdue problem sets to not dying at sea. Additionally, computer scientists may feel more comfortable typing at a screen than engaging in human contact. Although therapy chatbots exist and in fact have reached millions of smartphone users, they run on remote servers, creating privacy concerns. In this work, we propose that a local chatbot can also provide useful advice and can reach the vulnerable sub-population of computer science grad students. We create InadvisableRelationshipBot (IRBot) using high-quality online commentary from www.reddit.com/r/relationships.

And the PDF here:

https://web.archive.org/web/20190819094028/http://sigtbd.csail.mit.edu/pubs/2019/nancy-irbot.pdf

That was created around the time GPT was coming out in libraries. So I wanted to update the chatbot with the latest machine learning goodness, since I remember being kind of disappointed with the unintelligible output of the chatbot. Now I realize that’s part of the funniness.

So here is my quick one-day attempt at improving the chatbot (most of which was spent scraping reddit T^T). Left is previous chatbot, right is the one I made this weekend.

Left: 2019 chatbot, 25k rows data. Right: 2022 chatbot (gpt), 5k rows data.

The chatbot now has way more reasonable responses, but is far less funny. So I’ll have to spend some time tweaking that.

Methods

At a high level, I adapted a colab notebook for finetuning the DialoGPT model (originally on Rick and Morty dialog). DialoGPT is made by Microsoft and can be found in the hugging-face transformers library.

The transformers library allows for easy finetuning. So, we take the default DialoGPT model (which is available in three sizes) and apply it to /r/relationships data. Here is a comparison of the default DialoGPT (trained on reddit at large) vs the AdviceBot, which takes that model and trains it further on just /r/relationships.
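For concreteness, here is roughly what that finetuning step looks like with the transformers Trainer API. This is a minimal sketch, not my actual notebook: the CSV filename, its columns, and the hyperparameters are all made-up placeholders.

```python
# Sketch: finetune DialoGPT-small on (context, response) pairs.
# "relationships_pairs.csv" and its columns are hypothetical.
import pandas as pd
import torch
from torch.utils.data import Dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling,
                          Trainer, TrainingArguments)

tokenizer = AutoTokenizer.from_pretrained("microsoft/DialoGPT-small")
tokenizer.pad_token = tokenizer.eos_token  # GPT-2 tokenizers have no pad token
model = AutoModelForCausalLM.from_pretrained("microsoft/DialoGPT-small")

class DialogDataset(Dataset):
    """Each example is context + reply joined by the end-of-sequence token,
    which is how DialoGPT delimits conversational turns."""
    def __init__(self, rows, tokenizer, max_length=512):
        self.examples = []
        for context, response in rows:
            text = context + tokenizer.eos_token + response + tokenizer.eos_token
            ids = tokenizer(text, truncation=True, max_length=max_length)["input_ids"]
            self.examples.append(torch.tensor(ids))
    def __len__(self):
        return len(self.examples)
    def __getitem__(self, i):
        return self.examples[i]

df = pd.read_csv("relationships_pairs.csv")  # columns: context, response
dataset = DialogDataset(df[["context", "response"]].itertuples(index=False), tokenizer)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="advicebot",
                           num_train_epochs=3,
                           per_device_train_batch_size=4),
    train_dataset=dataset,
    # mlm=False means plain causal language modeling (next-token prediction)
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```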

I’m not sure what the bot freaking out with 1!??!! is about. Will have to find some NLP person to ask.

Data

I was super grateful for the methods section / time I put into documenting this in 2019. This time around I used PRAW and pulled the 200 hot posts (rather than the top-voted posts of the past year, which tend to be “updates”, not the Q&A I want). Deciding how I wanted to structure the data and how to clean and sort posts consumed most of my brainpower T^T. e.g. (rough sketch after this list):

  • remove posts with “update” in title
  • keep only the text after the tl;dr
  • don’t use the top-all time posts, as those will be mostly updates
  • use the reply-to-replies to create more “dialog” like the rick and morty captions
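Conceptually, the scraping loop was something like this (a sketch, not my actual script; the credentials are placeholders and the tl;dr extraction here is much cruder than whatever cleaning I actually did):

```python
# Sketch: pull hot /r/relationships posts with PRAW and build
# (context, reply) pairs from reply-to-reply threads.
import praw

reddit = praw.Reddit(client_id="...", client_secret="...",
                     user_agent="irbot/0.1")  # placeholder credentials

pairs = []
for post in reddit.subreddit("relationships").hot(limit=200):
    if "update" in post.title.lower():
        continue  # updates aren't the Q&A format we want
    body = post.selftext
    if "tl;dr" in body.lower():
        # keep only the text after the (last) tl;dr
        idx = body.lower().rindex("tl;dr")
        body = body[idx + len("tl;dr"):].lstrip(" :;-")
    post.comments.replace_more(limit=0)  # drop "load more comments" stubs
    for comment in post.comments:
        for reply in comment.replies:
            # reply-to-replies give us dialog-like turns,
            # a la the rick and morty captions
            pairs.append((comment.body, reply.body))
```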

In the end I used about 600 rows, giving the results you see above. Not bad. The other model in 2019 was trained on 25k rows, but if you go by that metric, the DialoGPT I finetuned was first trained on 147M conversations. And finetuning only took <10 minutes on free google TPU compute.

(I’m also curious how well a Markov chain would do; rough sketch below.)
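For scale, a word-level Markov chain baseline is only a few lines. A hypothetical sketch, untested on this data:

```python
# Sketch: order-2 word-level Markov chain over the reply texts.
import random
from collections import defaultdict

def build_chain(texts, order=2):
    chain = defaultdict(list)
    for text in texts:
        words = text.split()
        for i in range(len(words) - order):
            # map each `order`-word window to the words that follow it
            chain[tuple(words[i:i + order])].append(words[i + order])
    return chain

def generate(chain, order=2, length=30):
    state = random.choice(list(chain))
    out = list(state)
    for _ in range(length):
        followers = chain.get(tuple(out[-order:]))
        if not followers:
            break
        out.append(random.choice(followers))
    return " ".join(out)
```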

Some of the finagling went into getting a “person A – person B” style. However! Since the rick and morty dialogs are spoken by consistent characters, that chatbot develops a distinctive style. But here it’s a ton of different people in different styles contributing the dialog. So it’s less distinctive.

Training data from /r/relationships.
On the left: what the chatbot should respond with. On the right: what the user said beforehand.

Some other funny links.

https://kingjamesprogramming.tumblr.com/ – Markov chain trained on the King James Bible and SICP (Structure and Interpretation of Computer Programs)

37:29 The righteous shall inherit the land, and leave it for an inheritance unto the children of Gad according to the number of steps that is linear in b.

https://twitter.com/Hypo_Inspo – GPT2 on Ted Talks

https://www.reddit.com/r/totallynotrobots/comments/7x8zan/prototype/ – subreddit of humans pretending to be robots pretending to be humans

WARNING many posts NSFW (nsfl?) https://www.reddit.com/r/SubSimulatorGPT2/comments/sazjfy/23f_with_my_dad_56m_i_dont_know_if_we_have_a_good/ – but here is a subreddit by someone who finetuned a 1.5B-parameter GPT2 model on 500k posts. Pretty darn coherent x__x See more details about the subreddit simulator here:

Thoughts

I did learn that despite all the hype about GPT etc., chatbots are nowhere near realistic… generating one-off text, or single-line replies, maybe. But GPT is fundamentally stateless, and you deal with it by just appending the previous text and feeding the entire thing back through the model… (sketch below)
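Concretely, the workaround looks like the standard DialoGPT chat loop from its model card (sketched here); the only “state” is the token history you keep concatenating and re-feeding:

```python
# Sketch: multi-turn chat with a stateless causal LM by re-feeding history.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("microsoft/DialoGPT-small")
model = AutoModelForCausalLM.from_pretrained("microsoft/DialoGPT-small")

history = None  # the model itself remembers nothing between turns
for _ in range(5):
    user_ids = tokenizer.encode(input(">> ") + tokenizer.eos_token,
                                return_tensors="pt")
    # the statelessness workaround: prepend the entire conversation so far
    bot_input = user_ids if history is None else torch.cat([history, user_ids], dim=-1)
    history = model.generate(bot_input, max_length=1000,
                             pad_token_id=tokenizer.eos_token_id)
    # print only the newly generated tokens (the bot's reply)
    print(tokenizer.decode(history[0, bot_input.shape[-1]:],
                           skip_special_tokens=True))
```

(The history grows every turn, which is also why long chats eventually blow past the model’s context window.)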

It’s cool to see that more specific conversational AI is trained on data that is separated by a “personality” hierarchy (e.g. the Persona-Chat dataset by Facebook).

Also I’m essentially using this as a Q&A bot and not treating it as something with state. So that might be a fork in the project: one where you sort of chat through your problems with a friend, and another which is, well, seeking feedback from the collective internet.

But my immediate next step is to just have it generate closer to three lines, for increased hilarity. As well as make it more fun to interact with (vs re-running a notebook on colab every ten exchanges).

Final Funny Exchanges

o__o
oh my
