Category Archives: Thoughtful

i made a knitted hat on a knitting machine! (some cursing involved)

I made a hat!
(note: this post is not instructions, just a note on new tech i found. hasten ye to youtube for instructions if you want those)

the machine

I used my friend’s knitting machine. It exclusively makes circles/cylinders. I didn’t know anything about knitting, and I was able to make a hat in … well, okay, probably two hours (not including buying yarn). But there are people online making one in ten minutes! I probably could too with some practice.

crank away!

The yarn is threaded onto this machine and you crank the handle. There’s a little counter and you just count — 60 loops. Put on the next color (literally just snip the yarn leaving a long tail, tuck it inside, then thread the second yarn on around the circle). Crank away for another 40 loops. Realize you don’t have enough yarn and change colors again. Here is a video.

casting off (and cursing)

At the end, you cut the yarn and spin the machine one loop. This starts the removal process. Stop after one loop! In fact, go really slowly at the end, since if you go any further the loops pop off the needles entirely, and we need to catch them before that happens (at this stage they can completely undo).

You take a needle and thread yarn through all the final knits at the top, and pull them off the machine one loop at a time. This is where the cursing comes in: if you accidentally pop one off the machine because you’re a clumsy ape, it can start to slip through multiple rows of yarn. You have to stop the process and carefully re-knit them one row at a time.

It’s surprisingly hard to follow individual yarn threads and find the tiny loop that you dropped. If my friend hadn’t been there, I think the casting-off process would have ended in multiple hours of youtube videos and a general disillusionment.

cinch and add a pompom

Once the hat is cast off, you have a long tube. You cinch it tight at the top. At this point I also made a pom pom (see previous posts) on the spot by wrapping yarn around four fingers, taking the tail from the cinched-off hat, and using it to cinch off the pom pom too.

Then, tie a square knot or two.

As a final act, we hide the thread. To do so, you scrunch the hat and thread the yarn through a few loops (in a specific pattern that follows the existing yarns), and when you un-scrunch, you can poke the thread through into the center of the hat.

Hat!

finally you push one end of the tube up into the other half and you get a hat! (I also rolled up the brim in the pic below)

the end.

appendix

Notes on yarn

There’s a bit of trickery with the yarn: it has to be weight 4 (?) yarn that fits in the needles and also slides off of them well. This Sentro machine has 43 needles, I think, which come up one at a time, grab the thread being fed in, and then go back down. Here is the thread I used.

other more complex inspiration from youtube

We also looked up how people make patterns on their hats (other than just solid swathes of color). Seems like they just manually do so — loop one thread, then the next, then another color, etc.

Also, here’s an example of fixing a stitch, which is what I did with a lot of cursing: https://youtu.be/VhtOs-5lwI4?feature=shared&t=341

Manually knitting over the original to put in a design (“duplicate stitch”)

Or you can stitch, then cut the yarn, then stitch the new color, then cut the yarn, etc. It’s kind of intense: https://youtu.be/JMV49F45xuQ?feature=shared&t=1278

Weezer – Undone — The Sweater Song

oh, it’s true, you can pull a single yarn and undo the whole thing. my friend provided this extremely sophisticated proof: the lyrics from this song:

“♪ If you want to destroy my sweater ♪ Pull this thread as I walk away ♪ As I walk away! ♪”

 

Future work — DIY automated ugly christmas sweater?

in order to really make an ugly christmas sweater though, we can’t just be doing tubes all day e’ery day. So to do that you need to have two linear machines that interleave with each other. And to do that you first have to have one linear machine…

So some research (from above friend) on this.

The commercial linear ones are around $250–$500. www.amazon.com/Knitting-Machine-Stitches-Domestic-Accessories/dp/B09KG8X6XT

In terms of DIY — This is a circular one (perhaps among the best?)

https://www.printables.com/model/355228-circular-sock-knitting-machine-for-my-mom-and-you

But no linear DIY / OSHW ones seem to exist. So, maybe tbd?

 

migrating to zsh! finally (or: even while my friends sleep, terminal cat keeps me company)

sneak preview

blah blah background

mac defaults to zsh, but i resisted changing over right away.

i had a nice dotfiles automation set up (including my terminal timers, complete with a custom fun sound when a timer went off). using (someone else’s script) I could clone my dotfiles repo, run a single shell command, and have all my nice terminal customizations.

plus, i had a nice cat.

well, i have not used tt in a while anyway, since it works so-so on my current macbook — mostly, on a macbook you can’t force the terminal to stay on top, so the timer gets buried. (i really miss working in linux — once i make a bit more, we’re going back to linux haha. though it’s neat that some of the LLMs can run on apple silicon.)

I was also really happy with autojump. but I had seen brief glimpses of my friends using zsh, and it seemed to have really nice tab-completion.

so, zsh time it is!

okay, i’m too lazy to write a true write-up, so i’ll just document what i did for my own reference.

install oh-my-zsh

i’m not sure why oh-my-zsh specifically, but it seems nice. i think it bundles a lot of powerful plugins (it downloads them and lets you activate them with one line).

https://ohmyz.sh

sh -c "$(curl -fsSL https://raw.githubusercontent.com/ohmyzsh/ohmyzsh/master/tools/install.sh)"

right off the bat — git aliases!

i had that in my bashrc manually, but in oh-my-zsh there’s a one-liner.

alias ga='git add'

and already we can see in ~/.zshrc that we can do

plugins=(git)

and these aliases are included in oh-my-zsh with that one line!
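
for reference, a few of the aliases the git plugin gives you (from memory; the plugin’s README has the full list). they’re roughly equivalent to having these in your rc:

alias gst='git status'
alias gco='git checkout'
alias gd='git diff'
alias gp='git push'
alias gl='git pull'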

other oh-my-zsh plugins i enabled:

plugins=(git gitignore 
magic-enter
colorize
colored-man-pages
python
)
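
side note for future me: the bundled plugins are just directories under the oh-my-zsh install, so you can list what’s available:

ls ~/.oh-my-zsh/plugins   # everything that ships with oh-my-zsh
ls $ZSH/plugins           # same thing, via the ZSH variable set in ~/.zshrc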

and then let’s overwrite that (rip) — use random stranger’s plugin pack

okay, but i was missing my autojump, and i realized that there are still some manual steps to install the extra-fancy plugins. i didn’t want to develop a comprehensive understanding of completion: tab completion, autocompletion of commands vs files vs folders, frequency-based vs history-based, etc. i just wanted someone to pick a few for me. and all the different themes, wow.

hence then I used

https://github.com/gustavohellwig/gh-zsh

which includes zsh-completions and zsh-autosuggestions, which aren’t bundled with oh-my-zsh by default (oh-my-zsh does have its own completion setup, though).

sudo curl -fsSL https://raw.githubusercontent.com/gustavohellwig/gh-zsh/main/gh-zsh.sh | bash
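
for future reference, in case that script ever disappears: i believe the manual route is to clone the two plugins into oh-my-zsh’s custom plugins directory (assuming the standard layout) and then add them to the plugins=( ... ) list:

git clone https://github.com/zsh-users/zsh-autosuggestions ${ZSH_CUSTOM:-~/.oh-my-zsh/custom}/plugins/zsh-autosuggestions
git clone https://github.com/zsh-users/zsh-completions ${ZSH_CUSTOM:-~/.oh-my-zsh/custom}/plugins/zsh-completions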

powerlevel10k — no need for other themes

most importantly, it includes a customizable theme called powerlevel10k. its configuration wizard actually walks you through creating a custom prompt in detail — everything from what information you want on the right or left, whether you want a timestamp (and if so, in what format), whether you want minimalist or maximalist (with all the icons! and colored backgrounds!), whether you want a two-line prompt or not, how many colors you want, etc.

it was really cool actually!

p10k configure

after all that I ended up with

╭─ ~/Documents/projects/python_fun on main ········· at 22:48:40
╰─❯

okay but we need cats: editing the p10k prompt

very importantly, i then spent a while figuring out how to add a cat back into my prompt.

thus

vi ~/.p10k.zsh

it turns out an example of adding to the prompt is given on this line, so let’s uncomment it:

typeset -g POWERLEVEL9K_RIGHT_PROMPT_ELEMENTS=(
  [...]
  example  # example user-defined segment (see prompt_example function below)
)

way farther down, we can define it:

# Example of a user-defined prompt segment. Function prompt_example will be called on every
# prompt if `example` prompt segment is added to POWERLEVEL9K_LEFT_PROMPT_ELEMENTS or
# POWERLEVEL9K_RIGHT_PROMPT_ELEMENTS. It displays an icon and orange text greeting the user.
#
# Type `p10k help segment` for documentation and a more sophisticated example.
function prompt_example() {
  # p10k segment -f 208 -i '⭐' -t 'hello, %n'
  p10k segment -f 208 -i '<hi>' -t 'ฅ^•ﻌ•^ฅ'
}
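
(after editing ~/.p10k.zsh, the new segment won’t show up until the config is reloaded; restarting the shell is the lazy way i’d do it:)

exec zsh   # restart the shell in place so the edited ~/.p10k.zsh gets picked up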

So I put in a cat! hurray! now on the right-hand side my cat friend is back 🙂

this is what the cat looks like

 

╭─ ~/Documents/projects/python_fun on main ········ at 22:58:03
╰─❯ <hi> ฅ^•ﻌ•^ฅ

image: even when my friends are asleep … terminal cat keeps me company

(the <hi>  reminds me of old-school chat logs)

fix the python virtualenvironment display

okay, but what is up with the weird python environment display?

╭─ Py py311 ~/Documents/projects/python_fun on main 

There’s a random “Py” and the environment name is sort of just thrown in there, grr.

First off, I moved it from the right prompt to the left. It’s the virtualenv segment, not pyenv, for reasons i don’t know yet.

typeset -g POWERLEVEL9K_LEFT_PROMPT_ELEMENTS=(
virtualenv
[...]
)

Then I set delimiters around the environment name

 typeset -g POWERLEVEL9K_VIRTUALENV_LEFT_DELIMITER='('
typeset -g POWERLEVEL9K_VIRTUALENV_RIGHT_DELIMITER=')'

I also think it’s handy to show the actual python version, so I’ll try it out. And instead of the “Py”, which is super random, I’ll use a snake.

typeset -g POWERLEVEL9K_VIRTUALENV_SHOW_PYTHON_VERSION=true
typeset -g POWERLEVEL9K_VIRTUALENV_VISUAL_IDENTIFIER_EXPANSION='🐍'
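
a quick way to sanity-check the segment is to activate a throwaway virtualenv (the path here is just a made-up example):

python3 -m venv /tmp/p10k-test       # throwaway env, delete it afterwards
source /tmp/p10k-test/bin/activate   # the prompt should now show the 🐍 segment with the env name
deactivate                           # and the segment disappears again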

hurray!

there we have it — a zsh prompt I enjoy 🙂

At first I missed autojump, but after typing a few commands in manually, the completion has been great. Very pleased. will report more later (and I can always install autocomplete).

add oh-my-zsh plugins back

oh right — the gh-zsh install overwrote my ~/.zshrc, so we have to add the plugins back in.

╰─❯ vi ~/.zshrc
export ZSH="$HOME/.oh-my-zsh"

[...]

plugins=(git gitignore
magic-enter
colorize
colored-man-pages
python
)

MAGIC_ENTER_GIT_COMMAND='git status -sb .'
MAGIC_ENTER_OTHER_COMMAND='ls -la .'
source $ZSH/oh-my-zsh.sh

explanation of magic-enter plugin config

so basically, when you hit enter on an empty line, zsh automatically runs one of these commands, depending on whether you’re inside a git repo (git status -sb .) or not (ls -la .).
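
(for my own curiosity, this is roughly the mechanism; not the plugin’s actual source, just a sketch of a zle widget bound to enter that fills in the buffer when it’s empty:)

# sketch of a magic-enter style widget (not the plugin's real code)
my-magic-enter() {
  if [[ -z "$BUFFER" ]]; then
    if git rev-parse --is-inside-work-tree &>/dev/null; then
      BUFFER="git status -sb ."
    else
      BUFFER="ls -la ."
    fi
  fi
  zle accept-line
}
zle -N my-magic-enter
bindkey '^M' my-magic-enter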

change git icons to be fancier

okay, this is just because I was really confused by having a ?1 in my prompt, and i find it funny that the comment mentions exactly this. in ~/.p10k.zsh:

 # Branch icon. Set this parameter to '\UE0A0 ' for the popular Powerline branch icon.
# typeset -g POWERLEVEL9K_VCS_BRANCH_ICON=
typeset -g POWERLEVEL9K_VCS_BRANCH_ICON='\UE0A0 '

# Untracked files icon. It's really a question mark, your font isn't broken.
# Change the value of this parameter to show a different icon.
# typeset -g POWERLEVEL9K_VCS_UNTRACKED_ICON='?'
typeset -g POWERLEVEL9K_VCS_UNTRACKED_ICON='…'
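
(side note: the branch icon only renders if the terminal font actually has the powerline glyphs, i.e. a nerd font or other patched font. a quick check:)

echo $'\uE0A0'   # should print the branch symbol, not a hollow box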

all together now

this is the final result (for now)

note that there is an alias of “..” for “cd ..”, which seems nice. Also, on the very last line you can see how nice the autocompletion is — I only have to type “sou” and i can tab-complete the entire line.

random bugfix: locale

I changed “C” to “en_US.UTF-8” in the following variables to fix an issue with showing examples for bash commands, where I would get the random text “failed to set default locale”.

here’s what i mean by showing examples: I can just type “tar” and hit tab and get this:

specifically I changed in my ~/.zshrc:

export LANG="en_US.UTF-8" 
export LC_ALL="en_US.UTF-8"
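
to confirm it took effect in a new shell:

locale   # LANG, LC_ALL, and the derived LC_* values should all report en_US.UTF-8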

bonus: fancy-looking font: fira-code

someone in my past was very excited about ligatures and fancy fonts for coding. most specifically, this font turns != into something that looks more like ≠ while you are typing!

 brew install font-fira-code

well, i don’t think that worked; in the end i downloaded the zip, unzipped it, and opened the fonts.
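
(note to future me: i suspect the cask form is what i actually wanted, though i haven’t re-tested it:)

brew install --cask font-fira-code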

then i went into terminal settings and set it to use fira code 🙂

here are some examples of what it looks like. this is what i actually typed

╰─❯ echo 'Example fira-code ! and = is != , > and = >=, = = == => = and = ==, 0 o O lL1'

and it shows up like so
here is a video of how the ligatures are rendered on the fly:

 

 

all together now

for reference:

.zshrc

export ZSH="$HOME/.oh-my-zsh" 

[...]

plugins=(git gitignore
magic-enter
colorize
colored-man-pages
python
)

export LANG="en_US.UTF-8"
export LC_ALL="en_US.UTF-8"

MAGIC_ENTER_GIT_COMMAND='git status -sb .'
MAGIC_ENTER_OTHER_COMMAND='ls -la .'
source $ZSH/oh-my-zsh.sh

and  edits to .p10k.zsh

typeset -g POWERLEVEL9K_LEFT_PROMPT_ELEMENTS=(
virtualenv
[...]
)

typeset -g POWERLEVEL9K_RIGHT_PROMPT_ELEMENTS=( 
[...]
example
)
function prompt_example() {
p10k segment -f 208 -i '<hi>' -t 'ฅ^•ﻌ•^ฅ'
}
typeset -g POWERLEVEL9K_VIRTUALENV_LEFT_DELIMITER='('
typeset -g POWERLEVEL9K_VIRTUALENV_RIGHT_DELIMITER=')'
typeset -g POWERLEVEL9K_VIRTUALENV_SHOW_PYTHON_VERSION=true
typeset -g POWERLEVEL9K_VIRTUALENV_VISUAL_IDENTIFIER_EXPANSION='🐍'
typeset -g POWERLEVEL9K_VCS_BRANCH_ICON='\UE0A0 '

hurray!

AMIA 2024 – Monday (11/11/24) Recap

oracle

i started off with the oracle session, as i hadn’t really attended industry panels before. i enjoyed some of the jargon, like “stormed and normed.” also, the idea that oracle, as a big company, was late to the cloud / others had a years-long head start, so to catch up they hired the engineers who built the other clouds and asked for better, faster, cheaper. (?!)

from words to wonder

i dropped by there – it appears that for some tasks a fine-tuned bert still performs better than even the latest llms on token-level NER vs. document-level NER (not entirely sure what this means). this makes sense, since an llm is trained for causal prediction (next token), whereas embeddings may have more sense of individual words (bidirectional) and may be better suited for such classification tasks. i found it interesting that they pointed out, as a limitation, that there are other ways to instruct llms – for my own research i found it difficult to figure out how many prompts i should use to feel confident in my results. i guess … just one lol

food entities

this paper looked more at zero-shot and one-shot prompting for identifying food entities from patient-generated data. chatgpt performed a lot better than rule-based systems trained on fairly clean data, since it could handle stuff like “bbq” for barbecue and “happy meal” for mcdonald’s. the researchers were excited that we can now use more varied data without tailoring algorithms to each dataset.

bioner

i finally got an order-of-magnitude sense: the presenter said that fine-tuning the llm directly took around 30 mins on 4x A100 18 GB gpus. other methods include LoRA and parameter-efficient fine-tuning (peft). interesting that they just said there was no statistical significance for some of the comparisons.

nih common data element normalization

again emphasis that ensemble methods perform better. i guess this is true for humans too!

nabla

i also went to the Q&A part of the nabla session. it was cool to sit in a room of practicing clinicians vs. rooms of fellow researchers slightly adjacent to my work (i also miss the extremely technical nerds sometimes). i created a mastodon account (to use every half year or so between conferences? next time i’ll add it to my badge):

TIL: paperwork burden big driver of provider burnout (!), and “Ambient AI” shortens from 90 mins to 30 mins = can go home and spend time with family in evening, can see more patients #amia2024 nrobot@mastodon.social

https://mastodon.social/@nrobot/113467025691843019

i had no idea that the paperwork was so severe that it actually drives burnout. but it makes sense if it’s taking an extra 1.5 hrs of your life after work to catch up on all the paperwork !!

also this interesting quote “gen AI scribing [note to summary] is like ice cream — there’s some people who like it, but not many!”

WIG NLP

then i learned a bit about working groups. i’m still not entirely sure what they are, but i have to be an AMIA member to access them anyway (vs just attending the conference). they were trying to set up a mentor / mentee network but kind of struggling. in general i’ve noticed there’s less willingness to randomly connect and mentor than i expected. especially compared to the fabled west-coast-startup willingness to help each other out. maybe it’s that there are many high-level industry execs here.

yale – emulated clinical trial

trying to find a cohort retrospectively — they realized the biggest issue (they spent years on it) is the messiness of the data. a normal clinical trial is 100% accurate on its criteria. if you have 6 parameters that have to be pulled out with nlp, and the current state of the art is 80-90% accuracy per parameter, that compounds down to roughly 30-50% accuracy (0.8^6 ≈ 26%, 0.9^6 ≈ 53%), which is unacceptable for running an emulated clinical trial.

generally

i’ve heard a lot about how finding patients for (and other aspects of) clinical trials is slowing down research in a major way. so there’s a ton of research into how to address this.

posters

then i caught up with my coworkers’ posters! since i have the most reference material on this, i’ll post more later. basically, they presented the system side of a pilot to rank patients for screening, with an anecdote of catching a patient with cancer who wouldn’t have been caught by the old system. the pilot is rolling out to 28 hospitals soon. the clinicians presented the outcomes side (the impact on patients), while the informatics group presented the rapid system roll-out.

here i’ve started talking to random poster presenters and connecting with people. i think i was just talking to people who were too busy / too high-level before. humbling for sure, and also confusing coming from socializing in startup circles, i guess, as i never integrated well into higher education :'(


generally

the nicest seminar was still the keynote on sunday; not sure if that’s because i’m flitting in and out of sessions instead of sticking with one session and talking to people.

i skipped out on an entire half-session and just went to sit and read a book. the sensory room is just a small conference room on a different floor, but it was chairs at tables, no comfy couches, so i went to the lobby instead, sat on a comfy couch, and put in headphones. actually that place was incredibly loud, but it worked haha

dei event

i would not have gone on my own; at this conference i feel like a not-minority lmao. but i went with my coworkers and the industry sponsor provided good food. at first it was just a few people and we stood by ourselves, but the chair / co-chair came to talk to us, and by the time that finished the room was incredibly full. i was really impressed by one of them, who could tell that ray and rui are pronounced differently (rui has the tongue further back); i legit thought he did linguistics. he connected it to his christian beliefs, which i found interesting: the lord knew our true names before we were born, so really pronouncing names correctly is important to him. i didn’t connect with anyone new (too burnt out) but i reconnected with folks from WINE.

career talk

i talked to one person about my life goals, and that was a good talk. i should probably not treat my supervisor as adversarial. i admit it threw me off for him to say that i would be working under my coworker, so that coworker’s opinion was the more important one for performance evals. and it has made me mrrr to hear that i should be doing the work of the position i want while being paid half the cost of that position to do that much work. but i do not think my supervisor meant to imply i would be working under my coworker no matter what.

VA event

there was also a VA event that we dropped by, but everyone there was super tired so we talked to one group of people and left. i again felt some regional politics at play slightly.

personal

okay, i feel like i’m totally failing at connecting with people, but i have to remind myself i’m like … 2 days into getting to know people here, and two months into having some new blurb to write about myself.

it’s been incredibly relaxing, even if possibly foolhardy, to not go around the exhibition hall trying to connect with companies.

also there is so much i need to learn, which i can feel free to list now that i have a job and a foot in the door (and emotional reserves). likert, rouge, LoRA, spearman: these are all terms i need to learn. maybe i’ll install chat on my phone just so i can learn them in the moment, while i’m motivated and there’s context for how they’re used …

prior self

now i can see how i must have come across to people before i got a job: just kind of emotionally disconnected from everyone else, not wanting people’s pity, knowing they couldn’t help right away, and feeling hopelessly caught up in my own life issues. i keep trying to remind myself: one day, when i struggle to go to the bathroom on my own, i’ll look back on this time in my life and be like “wow, that was such a good time.”

or maybe i’ll get cancer earlier and just wish i had such normal problems where i thought i would live to old age. i’ll miss the time when i could still just call my parents and they didn’t have major health problems and i still had hope for them to live to 120. who knows, who knows.

general state of the world

i feel surprisingly detached from all my anger and fear about the elections in the united states. it barely takes up 1% of my mental headspace. it’s like 50% industry career new job, 30% technical skills worry, 15% relationships, 4% excitement about extremely random stuff on my backburner, 1% state of domestic and international world affairs / bitterness (how could we have someone who molests women as our president? and millions of people turned out to vote for him? i feel such shame and horror and despair and hopelessness that kamala harris is not our next president. all my degrees and accomplishments feel utterly useless. i feel so angry, so angry at anyone in my life that doesn’t hate trump. but remember — the world is not ending today or tomorrow, my friends can ally with me on other things, and my life will be okay).

high level planner

i’ve been trying to move in the opposite direction of my natural instincts. if i’m drawn to the climate and diversity and equity talks, i’ve been forcing myself to go in more purely technical directions. i would like to find my own pure happiness in technical implementation for the next few years independent of the impact. of course, from my perspective any work i do on AI is work I’m not doing on equity in AI and contributing to climate change in data centers. i’m not living up to my principles and the emotional energy i directed at it. but reminding myself that the alternate pathway is: get rich, make changes, donate to causes and empower others to work on these topics. i feel after my thesis on illicit massage industry and the energy i put into supporting my ex partner, it’s time to focus on just me. and nothing has to be permanent. it’s like i constantly have a higher-level planner — what should i be spending my time thinking about? usually the answer is, not what i’m thinking about right now. and it’s important but hopefully i can start tuning it down some day soon. i want to be thinking about interesting technical papers, where the field is going, cool packages i learned about recently, learning about monads and the latest bug in some code i wrote. hopefully next week i can finally start doing that.