<?xml version="1.0" encoding="utf-8"?><feed xmlns="http://www.w3.org/2005/Atom" ><generator uri="https://jekyllrb.com/" version="3.10.0">Jekyll</generator><link href="https://avikdas.com/feed.xml" rel="self" type="application/atom+xml" /><link href="https://avikdas.com/" rel="alternate" type="text/html" /><updated>2026-02-20T01:16:25+00:00</updated><id>https://avikdas.com/feed.xml</id><title type="html">Avik Das</title><subtitle>My name is Avik Das. I&apos;m a software developer with strong theoretical and mathematical foundations, as well as extensive industry experience. To me, technology is a tool for delivering meaningful products. In my spare time, I enjoy cooking, weightlifting and drawing.
</subtitle><author><name>Avik Das</name></author><entry><title type="html">How to be a better abuser</title><link href="https://avikdas.com/2025/08/04/how-to-be-a-better-abuser.html" rel="alternate" type="text/html" title="How to be a better abuser" /><published>2025-08-04T00:00:00+00:00</published><updated>2025-08-04T00:00:00+00:00</updated><id>https://avikdas.com/2025/08/04/how-to-be-a-better-abuser</id><content type="html" xml:base="https://avikdas.com/2025/08/04/how-to-be-a-better-abuser.html"><![CDATA[<p>Based on real events.</p>

<ol>
  <li>
    <p>Above all, be unpredictable. Being predictable means your victim can figure you out and respond accordingly. Instead, keep them on their toes, which will consume their mental bandwidth and prevent them from making meaningful progress in your relationship.</p>
  </li>
  <li>
    <p>Separate cause and effect. Don’t give reasonable feedback in a timely fashion. When your victim does something that bothers you, don’t express your emotions calmly to them at that time. Instead, wait for a related, but otherwise innocuous trigger and blow up at them. They’ll learn never to trust your initial reactions and will never know if something they do or say will blow up in their face.</p>
  </li>
  <li>
    <p>Love them without liking them. Be obsessed with them, telling them how much they mean to you. But day-to-day, don’t enjoy their company and constantly try to change them. Say you want to spend your life with them, but tear down their hobbies and their lifestyle. Your love will keep them around, but they will feel terrible when enjoying the things that make them happy.</p>
  </li>
  <li>
    <p>Rapidly oscillate between praise and criticism. Be extreme about it. You don’t just enjoy being around them, they are the best thing that ever happened to you. Then, turn around and tell them exactly how much they are hurting you just by being themselves.</p>
  </li>
  <li>
    <p>Target their insecurities. Don’t just tell them how you feel when their actions hurt you, attack the specific parts of their psyche they struggle with the most. Compare them to others, attack their sense of morality and tear down their identity. You don’t have to know why something triggers them, whether it was a past experience or their general upbringing, you just have to know how much it hurts.</p>
  </li>
  <li>
    <p>Finally, create an environment where it’s dangerous to say no. Criticize them constantly for asserting their boundaries. Then, ask for their consent for the things you want them to do. You may even respect their decision if they decline, but they won’t decline, because they don’t know how you’ll react.</p>
  </li>
</ol>]]></content><author><name>Avik Das</name></author><summary type="html"><![CDATA[Based on real events.]]></summary></entry><entry><title type="html">LLMs are like compilers, sort of</title><link href="https://avikdas.com/2025/05/05/llms-are-like-compilers-sort-of.html" rel="alternate" type="text/html" title="LLMs are like compilers, sort of" /><published>2025-05-05T00:00:00+00:00</published><updated>2025-05-05T00:00:00+00:00</updated><id>https://avikdas.com/2025/05/05/llms-are-like-compilers-sort-of</id><content type="html" xml:base="https://avikdas.com/2025/05/05/llms-are-like-compilers-sort-of.html"><![CDATA[<figure>
  <p><img src="/assets/images/2025-05-05-llms-are-like-compilers-sort-of/machines-ai-generated.jpg" alt="An AI-generated image of two monitor-like things, one displaying text, the other displaying binary and other incomprehensible text. To the left of the first monitor are some speech bubbles." /></p>
  <figcaption>An AI-generated image, appropriate for this post. The machine on the left is like an LLM, taking in prompts and producing code. The machine on the right is like a compiler, producing binaries. I tried really hard to generate a good image with AI, but my needs seem too esoteric. That's been my experience with AI coding too.</figcaption>
</figure>

<p>I’m no expert on LLMs and coding with AI. In fact, I feel like I’ve fallen behind. I’m still in the initial phases of trying out AI-augmented coding. This blog post is my attempt at addressing my own reservations about this new world by comparing current AI to early compilers. The audience is myself, but maybe it’ll help someone who’s hesitant to use AI in their day-to-day coding.</p>

<p>Whenever I have a gut reaction against AI coding, I remind myself: <strong>compilers faced the same backlash, but eventually, compilers (and high-level languages) enabled solving more complex problems faster, to the point of becoming indispensable tools.</strong> With that in mind, I shouldn’t dismiss AI coding.</p>

<p>As I sat down to write this blog post, I found someone else had already written a version of it: <a href="https://vivekhaldar.com/articles/when-compilers-were-the--ai--that-scared-programmers/">When Compilers Were the ‘AI’ That Scared Programmers</a>. I’ll rehash some of Vivek’s arguments, but Vivek is definitely pro-AI. I additionally want to explore another angle in my post.</p>

<h2 id="the-complaints-against-are-overblown">The complaints against AI are overblown</h2>

<p>My initial reaction to AI coding could apply almost point-by-point to early compilers:</p>

<ul>
  <li>
    <p>LLMs produce bad code. “Bad” can mean inefficient, or even buggy. Early compilers also produced bad code. But compilers got better, and so will AI.</p>
  </li>
  <li>
    <p>You’re giving up control. Same with (high-level) compilers, where you no longer decide exactly how your code maps to the actual execution on your machine. In exchange, you get to think about problems at a higher level, not worrying about… the execution on your machine.</p>
  </li>
  <li>
    <p>You lose out on understanding the fundamentals of your software, so when things go wrong, you can’t fix them. For many people, being able to solve complex problems outweighs the few times things go catastrophically wrong. Think of scientists who are just trying to model something and don’t actually care about being expert programmers or computer scientists. Meanwhile, for those of us whose core job is writing software, compilers haven’t changed the fact that learning computer science and understanding low-level programming is still useful, hence the utility of a solid computer science degree.</p>
  </li>
</ul>

<p>In my time as a programmer, when the end goal was solving a problem, and writing code was just a means to an end, I reached for a high-level language. (Sometimes the constraints, such as writing for a specific hardware target, made that impossible, but I’m talking about software I’ll run on my own computer or similar.) It’s just more productive. Maybe I’ll get to the point where I reach for an LLM.</p>

<p>As a side note: a lot of these same arguments apply to modern IDEs, with their fancy GUIs and auto-completion!</p>

<h2 id="the-problem-with-llms-code-is-a-liability">The problem with LLMs: code is a liability</h2>

<p>There is one fundamental difference I see with LLMs that I haven’t seen addressed. The way AI coding works today, the LLM spits out code that you have to maintain. The original prompts are no longer the source of truth; the generated code is. When you fix a bug, the generated code is an input to the LLM, and the next iteration changes that code incrementally.</p>

<p>That’s like saying the binary output of a compiler is what you check into source control. The binary is what you edit and the machine code is what you debug. But that’s not how things work today. Today, the original source code is the source of truth. As compilers improve, you recompile the source code to produce a better binary. When you have a bug, you look for logical errors in the source code and modify that until the produced binary does what you want. This would be as if you stored only the LLM prompts, and you evolved the prompts incrementally, running the LLM from a blank slate each time you want to execute your software. (I’m ignoring incremental builds, but in general, a clean build is always possible with a compiler.)</p>

<p>One day, AI may become deterministic enough that we would indeed just store the prompts as our source of truth. Even compilers can be non-deterministic when performing optimizations, but as long as they preserve the semantics of the source code, we’re okay with giving up full control over the machine code.</p>

<h2 id="coding-for-fun">Coding for fun</h2>

<p>Given all this, you’d think I’m convinced I have to use LLMs all the time. After all, I wouldn’t use a barebones text editor and start writing in assembly, right? I guess I’ll always be an odd one out, because that’s exactly something I like to do for fun! I’ve been <a href="/2025/01/01/reflecting-on-ten-years-of-my-personal-project.html">writing a compiler for over a decade</a>, and there’s a lot of assembly. In fact, there’s a lot of hand-assembled machine code, and I like it that way. I typically use Vim, with minimal auto-completion and no “go to definition”. Earlier in my career, I made it a point to make one of my work projects “Vim-friendly”: if you couldn’t keep the program in your head and navigate around the codebase by hand, the codebase was too complex.</p>

<p>So even in the 2020’s, when you’d think compilers and IDEs are a given, I enjoy artisanal, hand-crafted code after all. Still, a tool is a tool, and I should learn how to use LLMs to enhance my code. I’m not ready to be left behind.</p>]]></content><author><name>Avik Das</name></author><summary type="html"><![CDATA[An AI-generated image, appropriate for this post. The machine on the left is like an LLM, taking in prompts and producing code. The machine on the right is like a compiler, producing binaries. I tried really hard to generate a good image with AI, but my needs seem too esoteric. That's been my experience with AI coding too.]]></summary></entry><entry><title type="html">Reflecting on ten years of my personal project</title><link href="https://avikdas.com/2025/01/01/reflecting-on-ten-years-of-my-personal-project.html" rel="alternate" type="text/html" title="Reflecting on ten years of my personal project" /><published>2025-01-01T00:00:00+00:00</published><updated>2025-01-01T00:00:00+00:00</updated><id>https://avikdas.com/2025/01/01/reflecting-on-ten-years-of-my-personal-project</id><content type="html" xml:base="https://avikdas.com/2025/01/01/reflecting-on-ten-years-of-my-personal-project.html"><![CDATA[<figure>
  <p><img src="/assets/images/garlic-logo.png" width="192" height="256" alt="Parentheses forming the shape of a garlic clove, with the word Garlic written underneath" /></p>
  <figcaption>The Garlic logo</figcaption>
</figure>

<p>On April 12, 2014, I wrote a <a href="https://github.com/avik-das/garlic/blob/6aabfdd65bf585202948586215e0e3618cd91a17/s-expr.rb">quick and dirty interpreter</a> for a Scheme-like language. In the next week, I ripped out that code and laid the foundation for the compiler I’ve been working on for almost eleven years! I didn’t have time to reflect on it at the time of the ten-year anniversary, so I’m writing my thoughts down now.</p>

<p>The language and its compiler are called Garlic, a name I’ll talk about later.</p>

<h2 id="why-write-a-compiler">Why write a compiler?</h2>

<p>Ultimately: because I enjoy it.</p>

<p>In college, I took a class on compilers with a close friend. We didn’t do well on the second project—static analysis—partly because it was a hard project and partly because we had little time with all the other classes we were taking. The third project—native code generation—was due the week between classes and finals, allowing us to put in extra time. We did amazing on that project. Even with that success, there were many enhancements we didn’t have time for. I remember implementing integers as objects on the heap, while another student used <a href="https://en.wikipedia.org/wiki/Tagged_pointer">tagged pointers</a>, resulting in a significant speedup in our (admittedly contrived) test programs. I knew I wanted to spend more time in this domain.</p>

<p>That explains why I chose a Scheme-like language, as parsing would not be a significant effort, and I could focus on the code generation. That said, I did want to revisit the static analysis too. Almost two years after graduation, meaning almost three years after the compilers class, I finally sat down to pick up compilers again.</p>

<p>At this point, my motivation comes down to:</p>

<ol>
  <li>
    <p><strong>Learning</strong>. A one-semester class can only go so deep. The class is also (rightly) focused on fundamental concepts, less so on the specifics of individual architectures or file formats.</p>
  </li>
  <li>
    <p><strong>Exploring</strong> different design decisions, instead of picking one and implementing it due to time pressure.</p>
  </li>
  <li>
    <p><strong>Challenging</strong> myself.</p>
  </li>
</ol>

<h2 id="phase-one-the-ruby-implementation">Phase one: the Ruby implementation</h2>

<p>I chose to write Garlic in Ruby, hence the name <strong>G</strong>arlic’s <strong>A</strong> <strong>R</strong>uby <strong>L</strong>isp <strong>I</strong>mplementation <strong>C</strong>ompiler. The name wasn’t chosen until almost exactly a year later, and until then, even the repo was simply <code class="language-plaintext highlighter-rouge">scheme-compiler</code>.</p>

<p>Ruby is a language I enjoy using and this is my personal project. The tech stack for the main implementation still looks like this:</p>

<ul>
  <li>
    <p>Ruby as the compiler implementation language.</p>
  </li>
  <li>
    <p>A mix of C and x86-64 assembly for the runtime. This code is not executed during compilation.</p>
  </li>
  <li>
    <p>The <a href="https://kschiess.github.io/parslet/">Parslet library</a> for parsing. Told you I didn’t want to spend much time on the parsing!</p>
  </li>
  <li>
    <p>The compiler outputs x86-64 assembly <em>as text files</em> that are then fed into GCC or Clang. Those compilers handle the remaining steps of compiling any additional C code, linking everything together and producing an executable file.</p>
  </li>
</ul>

<p>The choice to rely on a C compiler and linker was based on what we did in my compilers class. To be fair, in other classes, I’d written an assembly-to-opcode assembler, so the compilers class was more about focusing on the parts we hadn’t learned before.</p>

<p>As I added more features to the compiler, I ran into some interesting challenges. Some of these do assume some knowledge about compilers to understand.</p>

<h3 id="the-garlic_fncall-helper">The <code class="language-plaintext highlighter-rouge">garlic_fncall</code> helper</h3>

<p>Objective-C is famous for its <code class="language-plaintext highlighter-rouge">objc_msgSend</code> function, a tightly-optimized piece of code underlying the entire message-passing, object-oriented nature of the language. Mike Ash, one of the premier experts in the Mac development ecosystem, wrote multiple articles about this function, including <a href="https://www.mikeash.com/pyblog/friday-qa-2012-11-16-lets-build-objc_msgsend.html">Let’s build <code class="language-plaintext highlighter-rouge">objc_msgSend</code></a>. Every method call in the language goes through this function.</p>

<p>I ended up with a similar design, calling my version <code class="language-plaintext highlighter-rouge">garlic_fncall</code> (initially <code class="language-plaintext highlighter-rouge">scm_fncall</code> before the name change). I think the idea of having shared code coordinate function calls is a common paradigm. For example, in other object-oriented languages, there might be some common code to look up method implementations in a virtual table, allowing for features such as inheritance. My version also went through many iterations, adding support for variadic functions, optimizing the code and eventually allowing the calling of user-defined C code!</p>

<h3 id="stack-alignment-on-x86-64">Stack alignment on x86-64</h3>

<p>One of the real-world problems I ran into was the stack alignment needed to follow the <a href="https://gitlab.com/x86-psABIs/x86-64-ABI">System V Application Binary Interface (ABI)</a>. An ABI defines, among other things, a <em>calling convention</em>, rules for how functions are called and what the called functions can expect from their callers. Because the generated code interfaces with code produced by other compilers, I need to follow these conventions to ensure compatibility.</p>

<p>One requirement is to ensure, when a function call is made using the <code class="language-plaintext highlighter-rouge">call</code> instruction, the stack needs to be aligned to a 16-byte boundary. There’s a catch, in that the call needs to be made with the stack alignment offset by 8 bytes, because the return address will be pushed to the top of the stack. The stack has to be aligned <em>after</em> the return address is pushed!</p>

<p>When testing on my Linux machine, I ignored this requirement and had no problems. When I tried the compiler on a Mac, the tech debt finally caught up to me. The commit history doesn’t convey the hair-pulling that ensued as I tried to patch the issue incrementally. Finally, I figured out a <a href="https://github.com/avik-das/garlic/commit/e7b83283c1942500ff1e50ba701a2304a076a7e5">general approach</a> that has served me well since then. In retrospect, the solution is easy, so maybe I just didn’t have enough years of experience back then.</p>

<h3 id="designing-and-implementing-modules">Designing and implementing modules</h3>

<p>This was a fun excursion. Instead of looking at the landscape of Scheme implementations at the time, I asked myself what kind of code <em>I</em> wanted to write to define a library or module for reuse, and what the consequences would be for code generation. Certainly, some of this was influenced by my work with Node.js at the time. With that in mind, a few months into the project, I added <a href="https://github.com/avik-das/garlic/commit/d857e4a5aa1e68f79fc88392782fbc44dc6306bc">module support</a>. Here’s how it looks:</p>

<div class="language-scheme highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="c1">; I can write any code I want in the module</span>
<span class="p">(</span><span class="k">define</span> <span class="nv">var-1</span> <span class="o">...</span><span class="p">)</span>
<span class="p">(</span><span class="k">define</span> <span class="p">(</span><span class="nf">fn-2</span> <span class="o">...</span><span class="p">)</span> <span class="o">...</span><span class="p">)</span>

<span class="c1">; Then choose what to export. I don't have to export everything.</span>
<span class="p">(</span><span class="nf">module-export</span>
  <span class="nv">var-1</span>
  <span class="nv">fn-1</span><span class="p">)</span>
</code></pre></div></div>

<p>And in the consumer module:</p>

<div class="language-scheme highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="p">(</span><span class="nf">require</span> <span class="s">"my-module"</span><span class="p">)</span>

<span class="p">(</span><span class="nf">my-module:fn-1</span> <span class="nv">my-module:var-1</span><span class="p">)</span>
</code></pre></div></div>

<p>Because this was my pet project, I chose to explore some interesting quality-of-life functionality: the ability to import symbols into the global namespace and to rename modules as I import them. The most interesting part was using my chosen syntax to enable analyzing what symbols are available throughout the program and providing clear error messages when a symbol is not defined or visible. I felt especially proud of this part because catching references to undefined symbols was an area of much frustration during my college class.</p>

<h3 id="calling-into-c-code">Calling into C code</h3>

<p>Aside from the C code in the language runtime, I also added support for defining your own Garlic functions in C. The original implementation used a separate syntax, <code class="language-plaintext highlighter-rouge">ccall</code>, to indicate calling a C function. In turn, the compiler generated different code for calling a Garlic function versus calling a C function. Eventually, I was able to unify the syntax, making it as easy to call a Garlic function written in C as it was to call a Garlic function written in Garlic.</p>

<p>To achieve this, I had to think about developer ergonomics, as well as make some general improvements to my code. One minor improvement was to ensure I was truly following the System V ABI, including passing arguments on the stack in reverse order. The idea was to call Garlic-native functions in the same way you would call a C function.</p>

<p>Still, there were going to be differences in how Garlic-native and C functions were called, hence the <code class="language-plaintext highlighter-rouge">ccall</code> syntax. What I finally landed on was to statically analyze the C code to determine which functions it exports, then use that information to wrap only those functions with the little bit of extra setup code they need when called. This eliminates the <code class="language-plaintext highlighter-rouge">ccall</code> syntax because the developer no longer needs to explicitly indicate they are calling a C function. The trade-off is that the way I have to analyze the C code is limited, forcing developers to write their exports without comments or pre-processor macros. For example, here’s what my string module exports look like in C:</p>

<div class="language-c highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="n">garlic_native_export_t</span> <span class="n">string_exports</span><span class="p">[]</span> <span class="o">=</span> <span class="p">{</span>
    <span class="p">{</span><span class="s">"null?"</span><span class="p">,</span> <span class="n">nullp</span><span class="p">,</span> <span class="mi">1</span><span class="p">},</span>
    <span class="p">{</span><span class="s">"concat"</span><span class="p">,</span> <span class="n">garlic_internal_string_concat</span><span class="p">,</span> <span class="mi">0</span><span class="p">,</span> <span class="mi">1</span><span class="p">},</span>
    <span class="p">{</span><span class="s">"concat-list"</span><span class="p">,</span> <span class="n">concat_list</span><span class="p">,</span> <span class="mi">1</span><span class="p">},</span>
    <span class="p">{</span><span class="s">"string-tail"</span><span class="p">,</span> <span class="n">string_tail</span><span class="p">,</span> <span class="mi">2</span><span class="p">},</span>
    <span class="p">{</span><span class="s">"symbol-&gt;str"</span><span class="p">,</span> <span class="n">symbol_to_str</span><span class="p">,</span> <span class="mi">1</span><span class="p">},</span>
    <span class="p">{</span><span class="s">"string=?"</span><span class="p">,</span> <span class="n">string_equalp</span><span class="p">,</span> <span class="mi">2</span><span class="p">},</span>
    <span class="p">{</span><span class="s">"at"</span><span class="p">,</span> <span class="n">character_at</span><span class="p">,</span> <span class="mi">2</span><span class="p">},</span>
    <span class="p">{</span><span class="s">"downcase"</span><span class="p">,</span> <span class="n">downcase</span><span class="p">,</span> <span class="mi">1</span><span class="p">},</span>
    <span class="mi">0</span>
<span class="p">};</span>
</code></pre></div></div>

<p>I’m happy with this trade-off, as it enables effective static analysis.</p>

<p>The other design decision I needed to make was the API I exposed to the C code. I pulled on my knowledge of famous C APIs, like in the Ruby and Python worlds, and I’ve been happy with the ergonomics of writing C extensions. You can see the <a href="https://github.com/avik-das/garlic/blob/ac5654a3128f30d8147e923c26652486a08ab9a9/stdlib-includes/garlic.h">full C API</a> at the time of writing.</p>

<h2 id="phase-2-the-meta-circular-implementation">Phase 2: the meta-circular implementation</h2>

<p>A year-and-a-half into the project, I was happy with the ability to write interesting programs, like a <a href="https://github.com/avik-das/garlic/blob/308e0bcb72fb4ef715e637fb3e2727c465e04042/http-test/server.scm">little web page</a> served by an embedded web server (<a href="https://github.com/avik-das/garlic/blob/c51c18e2d0d819037a91cf2098951c476e9b384b/stdlib-includes/http.c">C wrapper around microhttpd</a>, <a href="https://github.com/avik-das/garlic/blob/308e0bcb72fb4ef715e637fb3e2727c465e04042/stdlib-includes/html.scm">HTML generation library in Garlic</a>).</p>

<p>I wanted the next step of the compiler journey to be macro support, the ability to run Garlic code <em>at compilation</em> time to more easily extend the language. Unfortunately, I realized this would mean either creating a parallel interpreter for the language to run during compilation or finally emitting raw machine code instead of assembly text. The latter sounded more appealing, but it would be a large effort. In December 2015, I decided that instead of re-implementing the code generation within the Ruby implementation, I might as well rewrite the entire compiler in Garlic! That was the start of the <a href="https://github.com/avik-das/garlic/commit/0b0a28b01f355eb8ebafb69d315e0b4ca92552c1">recursive compiler</a>. I promptly neglected the project for almost two years after.</p>

<p>Since then, however, I have been putting my focus into this re-implementation. The goal is to rename the project to <strong>G</strong>arlic’s <strong>A</strong> <em><strong>R</strong>ecursive</em> <strong>L</strong>isp <strong>I</strong>mplementation <strong>C</strong>ompiler. In the process, I’ve learned a lot.</p>

<h3 id="crafting-a-usable-language">Crafting a usable language</h3>

<p>One goal I had for Garlic was to make a language that was useful for writing real programs. The language would never be used at a company trying to make money, but I wanted to use the language personally for more than just one-off test programs. Writing a compiler involves a fair amount of code, with some interesting I/O, string manipulation and data processing. All of that means the implementation language should be expressive enough to handle the complexities in this domain.</p>

<p>To that end, I’ve found gaps that, when filled, added significant expressivity to the language. The one I’m most proud of is destructuring assignments, allowing me to write code like:</p>

<div class="language-scheme highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="p">(</span><span class="k">let</span> <span class="p">((</span><span class="nf">a</span> <span class="o">.</span> <span class="nv">b</span><span class="p">)</span> <span class="p">(</span><span class="nf">fn-returning-pair</span><span class="p">))</span>
  <span class="p">(</span><span class="k">if</span> <span class="p">(</span><span class="nb">&gt;</span> <span class="nv">a</span> <span class="nv">b</span><span class="p">)</span> <span class="nv">a</span> <span class="nv">b</span><span class="p">))</span>
</code></pre></div></div>

<p>This is super useful for the types of complex data processing needed in a compiler, as it allows for passing around multiple values easily.</p>

<p>I’ve also extended the standard library to include string and file processing. However, both in terms of language features and standard library functionality, I’ve tried to avoid throwaway work:</p>

<ul>
  <li>
    <p>Any language features I want have to be added to the Ruby implementation. I will have to reimplement that feature in Garlic later.</p>
  </li>
  <li>
    <p>Any standard library function I write in C <em>may</em> need to be rewritten in Garlic, “somehow”. This depends on whether I end up reimplementing C module support, but without relying on GCC or Clang, I’m not sure what my plan is.</p>
  </li>
</ul>

<p>Still, I’m proud of the language I’ve created, because in my experience, it’s good enough to write code that wrangles the complexity of writing a compiler.</p>

<h3 id="elf-file-generation">ELF file generation</h3>

<p>Removing the dependency on an existing C compiler means I needed to output an executable that my operating system can load and run. On Linux, this meant constructing an Executable and Linkable Format (ELF) file. In turn, that meant I needed to understand the structure of an ELF file inside and out.</p>

<p>In the first half of 2023, I put some work into understanding ELF files. I didn’t make any commits to the compiler during that time, but two big artifacts from that time are:</p>

<ul>
  <li>A way to generate a reference, minimal ELF file <a href="https://github.com/avik-das/garlic/blob/1632510c1a4a605a9ccb5ab23fbec1f3e78b2e19/recursive/elf-exploration/write-elf.rb">byte-by-byte</a>.</li>
  <li>An <a href="https://scratchpad.avikdas.com/elf-explanation/elf-explanation.html">interactive visualization</a> of those bytes (not very mobile friendly, unfortunately).</li>
</ul>

<p>These artifacts were invaluable as I dropped and picked up the project over the course of the next year. With this understanding in hand, I was finally able to create a library to generate these files when given some machine code as the contents:</p>

<div class="language-scheme highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="p">(</span><span class="nf">require</span> <span class="s">"./elf-x86-64-linux-gnu"</span> <span class="nv">=&gt;</span> <span class="nv">elf</span><span class="p">)</span>
<span class="p">(</span><span class="nf">require</span> <span class="nv">file</span><span class="p">)</span>

<span class="p">(</span><span class="k">define</span> <span class="nv">test-code</span>
  <span class="o">'</span><span class="p">(</span><span class="nf">0x48</span> <span class="mi">0</span><span class="nv">xc7</span> <span class="mi">0</span><span class="nv">xc0</span> <span class="mi">0</span><span class="nv">x3c</span> <span class="mi">0</span><span class="nv">x00</span> <span class="mi">0</span><span class="nv">x00</span> <span class="mi">0</span><span class="nv">x00</span> <span class="c1">; mov  $60, %rax</span>
    <span class="mi">0</span><span class="nv">xbf</span> <span class="mi">0</span><span class="nv">x2a</span> <span class="mi">0</span><span class="nv">x00</span> <span class="mi">0</span><span class="nv">x00</span> <span class="mi">0</span><span class="nv">x00</span>           <span class="c1">; mov  $42, %edi</span>
    <span class="mi">0</span><span class="nv">x0f</span> <span class="mi">0</span><span class="nv">x05</span><span class="p">))</span>                        <span class="c1">; syscall</span>

<span class="p">((</span><span class="nf">compose</span>
   <span class="p">(</span><span class="k">lambda</span> <span class="p">(</span><span class="nf">b</span><span class="p">)</span> <span class="p">(</span><span class="nf">file:write-bytes</span> <span class="s">"generated-elf"</span> <span class="nv">b</span><span class="p">))</span>
   <span class="p">(</span><span class="k">lambda</span> <span class="p">(</span><span class="nf">e</span><span class="p">)</span> <span class="p">(</span><span class="nf">elf:emit-as-bytes</span> <span class="nv">e</span><span class="p">))</span>
   <span class="p">(</span><span class="k">lambda</span> <span class="p">(</span><span class="nf">e</span><span class="p">)</span> <span class="p">(</span><span class="nf">elf:add-executable-code</span> <span class="nv">e</span> <span class="ss">'main</span> <span class="nv">test-code</span><span class="p">)))</span>
 <span class="p">(</span><span class="nf">elf:empty-static-executable</span><span class="p">))</span>
</code></pre></div></div>

<p>The idea is that the machine code will be generated by the code generation module, leaving very little boilerplate to wrap that machine code into an executable. Since then, I’ve even been able to generate the code dynamically, and the ELF file generator makes that code executable!</p>

<p>The reason this functionality was so difficult is that ELF files contain many cross-references between different parts of the file. It’s not possible to generate an ELF file in one pass, as references between sections have to be resolved, and those resolutions depend on the size and contents of the other sections in the file. I’m very proud of having cracked this problem, and that too in Garlic code that I find understandable. (We’ll see how I feel when I come back to this code after a break!)</p>
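<p>To make the cross-reference problem concrete, here is a minimal two-pass sketch in Python (not Garlic, and not the actual library; the section layout and helper names are invented for illustration). The first pass computes every section’s offset from the sizes of the sections before it, and only then can the second pass patch symbolic references into the serialized bytes:</p>

```python
# Minimal sketch of two-pass offset resolution, the core difficulty in
# emitting ELF files. Section names and the patch format are invented.

def layout(sections):
    """First pass: compute each section's file offset from the sizes of
    all the sections preceding it."""
    offsets, position = {}, 0
    for name, payload in sections:
        offsets[name] = position
        position += len(payload)
    return offsets

def emit(sections, references):
    """Second pass: serialize the sections, then patch each symbolic
    reference with the offset resolved in the first pass."""
    offsets = layout(sections)
    output = bytearray()
    for _, payload in sections:
        output.extend(payload)
    for patch_at, target in references:
        # Overwrite 4 bytes with the target section's offset, little-endian.
        output[patch_at:patch_at + 4] = offsets[target].to_bytes(4, "little")
    return bytes(output)

# A "header" that must point at the code section, even though the code
# section's offset depends on the header's own size.
sections = [("header", b"\x00" * 4), ("code", b"\x0f\x05")]
references = [(0, "code")]  # bytes 0-3 hold the offset of "code"
print(emit(sections, references).hex())  # -> 040000000f05
```

<p>A real ELF emitter has to do this for the file header, program headers, section headers and string tables all at once, which is why a single-pass design doesn’t work.</p>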

<h3 id="better-error-messages">Better error messages</h3>

<p>An unexpected benefit of the rewrite was creating infrastructure for better error reporting than even the original Ruby implementation supported. It wouldn’t be hard to improve the error handling in the Ruby implementation, since Parslet gives the necessary information should I choose to use it. However, I’m proud I was able to pass around the necessary information about lines and columns <em>within my hand-written lexer/parser</em>. See this beautiful error report:</p>

<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>Compilation failed (2 errors)

  ERROR: undefined variable 'undefined-variable' (test-errors.scm:2:10)

    2| (display undefined-variable)
       ---------^

  ERROR: undefined variable 'undefined-variable-again' (test-errors.scm:4:10)

    4| (display undefined-variable-again)
       ---------^
</code></pre></div></div>

<p>I definitely took inspiration from modern languages like Rust here.</p>
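<p>The caret-and-gutter rendering itself takes surprisingly little code once every token carries its source position; the hard part is threading that position through the lexer and parser. Here is a rough Python sketch of just the rendering step (not the actual Garlic implementation; the function and parameter names are invented):</p>

```python
# Sketch of rendering one compiler error in the caret style shown above.
# Assumes the front end already tracked each token's line and column.

def render_error(message, filename, line_num, column, source_line):
    """Format an error with a line-number gutter, the offending source
    line, and a dashed arrow ending in a caret at the 1-indexed column."""
    gutter = f"  {line_num}| "
    return "\n".join([
        f"ERROR: {message} ({filename}:{line_num}:{column})",
        "",
        f"{gutter}{source_line}",        # the offending line, with gutter
        " " * len(gutter) + "-" * (column - 1) + "^",  # caret under column
    ])

print(render_error(
    "undefined variable 'undefined-variable'",
    "test-errors.scm", 2, 10,
    "(display undefined-variable)"))
```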

<h2 id="looking-forward-to-the-future">Looking forward to the future</h2>

<p>With a flurry of activity in the last few days, I’m happy with the progress I’ve made. I hope Garlic will be a lifelong project for me, and I don’t know if I’ll ever call it finished. Some of the things I see in the future:</p>

<ul>
  <li>
    <p>Obviously, finish the code generation. I’ll have to think about problems like dynamic linking and relocatable code for any of this to be scalable. That said, I’m excited at the possibility of making Garlic support low-level programming to avoid the need for C-based scaffolding.</p>
  </li>
  <li>
    <p>Finally adding macro support.</p>
  </li>
  <li>
    <p>Retiring the Ruby implementation once the recursive implementation is finished.</p>
  </li>
  <li>
    <p>Maybe one day writing a hobbyist operating system and using Garlic as the system language?</p>
  </li>
</ul>

<p>Until then, I’ll keep hacking away at this project that has occupied over a decade of my life.</p>]]></content><author><name>Avik Das</name></author><summary type="html"><![CDATA[The Garlic logo]]></summary></entry><entry><title type="html">On effective communication</title><link href="https://avikdas.com/2024/10/15/on-effective-communication.html" rel="alternate" type="text/html" title="On effective communication" /><published>2024-10-15T00:00:00+00:00</published><updated>2024-10-15T00:00:00+00:00</updated><id>https://avikdas.com/2024/10/15/on-effective-communication</id><content type="html" xml:base="https://avikdas.com/2024/10/15/on-effective-communication.html"><![CDATA[<figure>
  <p><img src="/assets/images/2024-10-15-on-effective-communication/talk-it-out.png" width="256" height="256" alt="Three speech bubbles spelling out Talk it Out" /></p>
  <figcaption>The original logo from Talk it Out</figcaption>
</figure>

<p>A few years ago, I published <em>Talk it Out</em>, a series of 1-2 minute mini-podcasts around effective communication on a platform called Jam, or Just a Minute. The service has since shut down, so I’m re-publishing my series as one blog post. Huge thanks to <a href="https://www.linkedin.com/in/petejadavies/">Pete Davies</a> and <a href="https://www.linkedin.com/in/chrispruett/">Chris Pruett</a> for providing the platform and encouraging me to get my thoughts out into the open.</p>

<p>Contrary to the guidance in this post, the sections below may seem a bit disjointed. Initially, they were weekly episodes that were meant to stand alone.</p>

<hr />

<h2 id="why-communication">Why communication?</h2>

<p>Everyone talks about how important communication is, but why? So many conflicts or missed expectations happen because of poor communication. Think about some of these examples:</p>

<ol>
  <li>
    <p>You’re arguing with your partner, or a friend. You just can’t understand why the other person doesn’t get it! Poor communication.</p>
  </li>
  <li>
    <p>You work somewhere that values good ideas. Fantastic! But how do people know your idea is good or that it even exists, if you don’t tell them?</p>
  </li>
  <li>
    <p>Ever had to convince someone to help you with something? They can’t help you if you don’t tell them what you need in the first place.</p>
  </li>
</ol>

<p>Communication happens in all sorts of ways: written, verbal, body language. No matter the medium, you need to know what you want to say, and say exactly that so you don’t drown out your main point. All of this takes practice. As long as you’re working with other people, you need to be an effective communicator. This post is meant to help you be just that.</p>

<h2 id="the-1-2-3-of-effective-communication">The 1-2-3 of effective communication</h2>

<p>There are a lot of ways to improve communication, but if I’m in a pinch, the most important tip is the 1-2-3 of effective communication:</p>

<ol>
  <li>What does the audience already know?</li>
  <li>What does the audience want to know?</li>
  <li>How do you connect the dots?</li>
</ol>

<p>Let’s break this down a bit.</p>

<p>One, <strong>what does the audience already know?</strong> This is how you avoid that dreaded presentation where everyone is bored out of their minds because either they know the material, or it’s all going over their heads. I’ve even seen presentations that do both by repeating the basics and skimming over the hard stuff!</p>

<p>Two, <strong>what does the audience want to know?</strong> If you’ve ever been to a talk that never got to the point, you know what I’m talking about. You have to actually give your audience what they want!</p>

<p>And three, <strong>how do you connect the dots?</strong> This is your job as a communicator. But since you know the starting and ending points, you have the tools to get there.</p>

<p>Follow these principles, and your communication will be focused, engaging and actually useful to your audience.</p>

<h3 id="know-your-audience">Know your audience</h3>

<p>It’s easy to think communication is about you, the person doing the communicating. But really, it’s about the audience. If they don’t understand what you’re saying, all your work is wasted.</p>

<p>I’m a software engineer. All the time at work, I collaborate with people who are not as technical as me, because their expertise is elsewhere. Designers and product managers, for example. The legal team. Marketing! These people are smart, they just specialize in different areas than me.</p>

<p>In one job, I had to convince a bunch of non-technical people why investing in making our software bug-free was so hard to do. The thing is, the product integrated with a bunch of external data sources and put it all together for our users. That also means a lot of places for things to go wrong, because communication between the different sources could break down. If you’re a technical person, that description is obvious, but if you’re not technical… well, it still might make sense, kind of. But the scale of the problem doesn’t really sink in.</p>

<p>So what I did was: I got my coworkers to stand in a line and pass sticky notes between each other. One person was the mobile app, another person was the database where the data was stored, and so on. Showing a little bit of data on the app took a lot of steps. That showed how complicated the system was, and how many places things could go wrong.</p>

<p>What the others lacked wasn’t intelligence. They lacked experience working with software systems, and experience seeing things go wrong. By having them act out the system’s behavior, I got them “in the trenches” in a way that was interactive and memorable.</p>

<p>So next time you need to explain something complicated to people with a different set of knowledge than you, think about what’s missing in their understanding and that’s where you want to focus.</p>

<h3 id="what-does-your-audience-want">What does your audience want?</h3>

<p>Understanding what your audience already knows isn’t enough. You have to understand what they want to know.</p>

<p>You know those app or product websites that are supposed to get you to download or buy the product? Here’s a mistake I see all the time on those pages: nothing but a list of features. Okay, but what problem is the product trying to solve? When someone ends up on your product page, they want to know if you can solve their problem. For that, you need to tell them exactly what problem your product is solving.</p>

<p>Or another scenario is when my manager and my teammate ask me how my project is going. Both are technical enough that I can give them the same answer, but there’s some subtext. My teammate wants to know when my part will be done so she can build on top of it for her part, so I’ll talk about how I’ll have enough done in the next two days that she can move forward while I continue working on the pieces she doesn’t depend on. Meanwhile my manager wants to know if there’s anything he can help me with, so I’ll give a broad overview and highlight the parts where I’m waiting for another team. Same question, but since the other person wants to know something different, I can communicate more effectively anticipating their needs.</p>

<p>Don’t worry, I don’t have to read their minds. If I’m not sure, I should ask them for clarification before I answer. Once I know them well enough, I won’t even need to do that.</p>

<p>So if you want to make sure your communication is actually hitting the spot, understand what your audience wants to know and give them exactly that.</p>

<h2 id="structuring-communication">Structuring communication</h2>

<p>To ensure your communication has the maximum impact, start with a clear thesis statement, tie your supporting points back to that thesis, and do so incrementally.</p>

<h3 id="have-a-clear-thesis-statement">Have a clear thesis statement</h3>

<p>One of the most effective things you can do to get your point across is to just say it clearly, and right at the beginning. In other words, start with a clear thesis statement.</p>

<p>What’s a thesis statement? A thesis statement is a sentence that references the topic you’re talking about and states your opinion of the topic.</p>

<p>Have you ever read something and by the end, you didn’t really understand what the point was? Or maybe you wrote an email, only to have people get the wrong message from it? What happened was you didn’t tell the reader clearly and decisively what you wanted them to get out of your words. Back when I was writing about hiring practices in the tech industry, a lot of what I was writing about was controversial. But I always made sure to state exactly what my opinion was in one sentence, so if my readers read nothing else, they would at least hear my central argument. Plus, it helped me, as the writer, make sure I even had a point in the first place!</p>

<p>Not only should you have a thesis statement, but you should put it up near the beginning, maybe even as the first sentence. That will put your argument in the reader’s mind, and they’ll keep it in their mind as they read the rest of your article or email. Now, you might be thinking you don’t want to present an opinion without some evidence to back it up, and that’s great! But don’t worry, you’ll still be backing up your opinion. The difference is, the reader will already know what you’re trying to prove with your evidence.</p>

<p>(You might hear about the thesis statement referred to as Bottom Line Up Front, or BLUF.)</p>

<p>Now, here’s my challenge to you: read this section and see if you can find my thesis statement. Here’s a hint: it’s right up front!</p>

<h3 id="back-up-your-thesis">Back up your thesis</h3>

<p>With your thesis statement out of the way, the next step is making sure the rest of your communication backs up your thesis statement.</p>

<p>Every section, every paragraph, every sentence ultimately should tie back to your central point and support that point. When I helped people out with their writing, the most common feedback was to ask how each point they’re making ties back to their thesis. Okay, it actually was that they didn’t have a thesis, but it was hard for them to come up with a thesis precisely because the different paragraphs didn’t back up a coherent point. It’s like they were writing two or more separate articles. Even worse, those separate articles contradicted each other!</p>

<p>That’s not to say you should omit any evidence against your main point. You can incorporate data that doesn’t back up your central argument, but tie it back by saying why you still believe in your thesis after all. That’s how you make your argument bulletproof.</p>

<p>At the end of the day, you’re trying to make a point, and the only way to make that point is to consistently provide evidence to back up that point. Everything else is just confusing.</p>

<h3 id="the-pyramid-style">The Inverted Pyramid Structure</h3>

<p>If you want to keep your audience’s attention, one important tool is the <strong>Inverted Pyramid Structure</strong>. With the Inverted Pyramid Structure, you put the most important information at the beginning, then reveal increasingly minor details over time.</p>

<p>The Inverted Pyramid Structure hooks your audience’s attention and saves them time. Since the most important, high-level summary is right at the beginning, your reader knows right away whether they care about what you’re saying. This is why a thesis statement is so important: it’s the most important point, right up front. But one step further, the Inverted Pyramid Structure means your reader can keep going, exactly to the point where they have enough information. Everything after that is a detail not worth their time.</p>

<p>Aside from saving your audience’s time, the Inverted Pyramid Structure helps them understand you better. Each point you make primes their brain to contextualize what you’re going to say next. No more reading a detail and not even knowing what it’s talking about!</p>

<p>With that, I urge you to look for how I applied the Inverted Pyramid Structure in this section. In fact, once you recognize this structure, you’ll see it everywhere!</p>

<h2 id="communicating-the-right-thing">Communicating the right thing</h2>

<p>Communicating clearly means nothing without the right substance. The next few sections discuss what you should talk about in the first place.</p>

<h3 id="talk-about-what-you-know">Talk about what you know</h3>

<p>In high school, presentations always required a lot of preparation. The topic was assigned by the teacher, and I’d have to do research to learn about that topic, almost memorizing what I was going to present. In college and later in the workforce, I got to give presentations on projects I was already working on, and the process was way smoother. I had to give a 45 minute presentation on my undergraduate research? Easy!</p>

<p>If you know the topic you’re talking about, you can adapt to questions others might have. You can deep dive into areas people respond to. Above all, you’re spending less energy recalling the basic facts, so you can talk more confidently. There’s even more trust when people understand you’re knowledgeable about the topic. All of these factors make your message more digestible to your audience.</p>

<p>And it’s okay if you’re not an expert. It may be that the topic you’re actually knowledgeable about is being a beginner at learning something! Talking about learning something new, from a beginner’s perspective, is still valid.</p>

<p>Whatever the case, don’t wing it. Talk about what you know.</p>

<h3 id="agree-on-the-problem">Agree on the problem</h3>

<p>One of the most common sources of miscommunication I see is when two people are arguing logically about something, just not about the same thing!</p>

<p>In my field of software engineering, one person might be proposing a solution to make the app faster, even if that means there are some errors here and there, while another person is proposing a solution to make the app less error-prone, even if it gets slower. Both perspectives are useful, but maybe the user is complaining about the app taking up too much space on their device! Without knowing what problem is being solved, your logical solutions may make the other person feel like they’re not being heard.</p>

<p>To avoid this, take the time to state the problem early on. What’s wrong that needs solving? What requirements should a good solution meet? And go deep! Don’t just say you want to make your app better; say you’re looking to make the app take up less space on more limited phones. Often, you can find a discrepancy right there, and you don’t waste your time talking about something that doesn’t matter to your audience. Only keep going after everyone involved agrees on the problem.</p>

<p>If you want to end up on the same page, start on the same page.</p>

<h3 id="define-your-terms">Define your terms</h3>

<p>Another source of miscommunication I’ve seen is having different definitions for the same words. In the previous section, I gave the example of two software engineers wanting to make the app they’re working on “better”. One wanted to make it faster, and the other wanted to make it less buggy. If they both were clear about what improvements they wanted, they could figure out their disagreement right away. Instead, they used the same word “better” without defining it, so they were talking over each other.</p>

<p>This kind of confusion is weaponized all the time, especially in politically charged scenarios. You can never come to an agreement if both sides aren’t acting in good faith, but here’s what you can do on your side. First, be clear about what your terms mean, which can entail choosing less ambiguous terms. Second, understand how others interpret your words. What are their definitions?</p>

<p>And here’s the fun part: sometimes, you might already be agreeing because you’re using different words for the same concept! So define your terms.</p>

<h3 id="align-your-principles">Align your principles</h3>

<p>I’ve been talking about getting everyone on the same page, and on that theme, my last piece of advice is make sure everyone has the same starting assumptions and the same value systems.</p>

<p>Let’s bring up that example again of two app developers, one who wants to make the app faster and another who wants to make it less buggy. Both of them might be very reasonable, logical people, but they find themselves coming to completely different conclusions. Why is that? Two things might be happening. First, the developers might have different starting assumptions. One thinks the app isn’t even buggy to begin with, and the other thinks the app isn’t slow. Second, the developers may disagree on what’s important to users. Do users care about that last bit of speed, or those occasional bugs? Two completely logical people who start at different places and use different rules will of course come to different conclusions!</p>

<p>This is why big corporations care so much about measuring user behavior and having a unified culture. They want everyone at the company to start at the same point and head in the same direction. So the next time you have a disagreement, start by examining the other person’s assumptions and their values. You might still disagree, but your conversation will be much more fruitful.</p>

<h2 id="writing-style">Writing style</h2>

<p>You have the substance and the structure, next up is adopting the right style. While communication style should be personal to you, some basic principles apply.</p>

<h3 id="tell-me-a-story">Tell me a story</h3>

<p>One of the most powerful tools in your communication toolbox is storytelling. Humans have a rich tradition of storytelling and we respond well to these narratives. Think fables meant to teach people moral lessons and epics documenting history, however embellished they may be. Not every piece of communication will be a story, but don’t discount its place in your arsenal.</p>

<p>Case studies are a common example of storytelling in settings where you wouldn’t think stories belong. Early in my career, I solved a problem for my team and shared my learnings outside the company. To keep people interested, I set the scene by introducing the problem, then talked about how I solved that problem and new ones as they came up. It was a me vs. the problem narrative. This motivated why my solution looked the way it did, conveying my point more effectively than if I had just listed out what my solution did.</p>

<p>(<a href="https://youtu.be/JX-4_z2A_zs">Video of the presentation if you’re interested.</a>)</p>

<p>But even if you’re not talking about a real-world account, your communication can have elements of storytelling in it. When you give some background information, you’re setting a scene and introducing the characters. By going from general points to specifics and tying it back to your main point, you’re creating a beginning, middle and end, ensuring your audience remembers what you said.</p>

<p>You don’t have to write literature to tell stories. Get creative!</p>

<h3 id="write-and-re-write">Write and re-write</h3>

<p>When you have the luxury to do so, put down your thoughts, then edit them to be cohesive. I’ve given a lot of advice on how to best structure your communication, but if you don’t have something worth communicating, all the structure in the world won’t help you.</p>

<p>Here are some things I do when writing, including when creating this entire series:</p>

<ol>
  <li>Use outlines to get my thoughts listed out. That way, I know all the points I want to cover.</li>
  <li>Write out of order. Maybe I’ll write my conclusion, or a later point first, then come back to fill in the supporting arguments later.</li>
  <li>Write rambling sentences, then cut them down to their key points.</li>
  <li>Write a bunch of different paragraphs, then get rid of the ones that don’t feel necessary.</li>
</ol>

<p>Eventually, some of this will become second nature, and your first draft will look closer to your final product. But transforming what’s jumbled up in your head into something that other people can understand always takes some editing.</p>

<p>(I applied these techniques as I republished my scripts in blog form.)</p>

<h3 id="cut-the-fluff">Cut the fluff</h3>

<p>I’m going to keep this section short: cut the fluff.</p>

<p>In the last section, I talked about the importance of putting down your thoughts and editing them later. If you do this, you’ll find your first drafts verbose and unwieldy. That’s okay! But it does mean you have to be ruthless about removing anything that’s unnecessary.</p>

<p>Get rid of words that don’t add to your point. Get rid of sentences, paragraphs, even entire sections. Some flourishes are okay; they’re your style. But ask yourself: is this really something I need to keep?</p>

<p>This process is painful. You don’t really want to get rid of those beautiful words, right? But remember: the more you cut, the more impactful the remaining words will be.</p>

<h3 id="set-the-stage">Set the stage</h3>

<p>I’ve talked before about not explaining what <a href="#know-your-audience">your audience already knows</a>, because that’s boring, even patronizing. But that doesn’t mean you completely ignore those points. Instead, concisely summarize the background information you expect your audience to know, in order to provide the right context for the new material you’re about to present. The goal isn’t to actually explain that background material, just to reference it so you and your audience are on the same page.</p>

<p>In fact, I’m leading by example here! To set the stage for this section, I quickly referenced an earlier section about knowing your audience before jumping into the current thesis about establishing context.</p>

<p>As a software engineer, I’ve written a lot of technical documents. Each of those documents has a Background section. The section is usually only a paragraph or two, but it contains links to other material. Most of my readers will already know that material, so the section just jogs their memory. Anyone else can follow the links if they need to brush up on the context.</p>

<p>Give your audience a good starting point, and they’ll follow along much more easily!</p>

<h2 id="practicing-good-communication">Practicing good communication</h2>

<p>Even if you follow all the guidance in this post, you need to live and breathe communication for it to be effective. Remember, communication is a collaborative exercise.</p>

<h3 id="communicate-by-listening">Communicate by listening</h3>

<p>You can’t communicate effectively without listening.</p>

<p>There’s an idea floating around that some people want to be listened to, and some people want solutions. The truth is, even those who want solutions need to be listened to. As I mentioned in an earlier section, <a href="#agree-on-the-problem">Agree on the Problem</a>, you have to make sure you’re addressing the right problem. And for that, you have to listen to what the other person actually wants.</p>

<p>So how do you listen effectively? Here’s what you do:</p>

<ol>
  <li>
    <p>Try to understand the big picture while you listen to the small details. Not everyone knows how to word their problems in a way that makes sense to you, so you’ll have to hear each detail, read between the lines and extract the themes all of their words convey.</p>
  </li>
  <li>
    <p>Ask clarifying questions. Don’t interrogate them. Ask with curiosity so you can understand better. Reflect that curiosity in your tone so the other person doesn’t get defensive.</p>
  </li>
  <li>
    <p>Repeat your interpretation back to them so you know you’re on the same page. Make sure they agree you understand them.</p>
  </li>
</ol>

<p>There’s a common saying: measure twice, cut once. I say, listen twice, talk once.</p>

<h3 id="communicate-in-good-faith">Communicate in good faith</h3>

<p>I’ve talked a lot about what you can do to communicate clearly and without misunderstandings. I’ve even talked about how to make sure you don’t misunderstand the other person. But all of this assumes all parties are communicating in good faith.</p>

<p>What exactly does “in good faith” mean? It means everyone involved is trying to reach a common conclusion, even if it’s not the position they started with. But not everyone wants a shared understanding. They just want to win at all costs. You can see this in their communication style, which uses tactics such as, but not limited to:</p>

<ul>
  <li>
    <p>Not being consistent. If you give a counterpoint to something they say, they come back with a counter-counterpoint that contradicts their original argument! You keep going back and forth, but somehow, they just have to have that last word.</p>
  </li>
  <li>
    <p>Arguing against what they think you said, putting you on the defense for something you didn’t even want to talk about and making you clarify the same points again and again.</p>
  </li>
  <li>
    <p>Flooding you with tangents and even sources that don’t back up their argument, forcing you to do their research for them. By the time you have a response, they’ve moved on.</p>
  </li>
</ul>

<p>The common thread in all these tactics is the other person doesn’t want to communicate effectively. Recognizing that quickly is the key to cutting short that argument. Save your breath for someone who actually wants to talk to you!</p>]]></content><author><name>Avik Das</name></author><summary type="html"><![CDATA[The original logo from Talk it Out]]></summary></entry><entry><title type="html">Interactive demos using Astro</title><link href="https://avikdas.com/2023/12/30/interactive-demos-using-astro.html" rel="alternate" type="text/html" title="Interactive demos using Astro" /><published>2023-12-30T00:00:00+00:00</published><updated>2023-12-30T00:00:00+00:00</updated><id>https://avikdas.com/2023/12/30/interactive-demos-using-astro</id><content type="html" xml:base="https://avikdas.com/2023/12/30/interactive-demos-using-astro.html"><![CDATA[<p>This blog is mostly text and images, but I’m a big fan of adding interactive components to make my explanations more effective. See my post on <a href="/2020/09/08/rendering-curves-in-3d.html">rendering curves in 3D</a> as an example. For those interactive demos, the browser environment, with Javascript for the interactivity, is a fantastic delivery mechanism with wide reach and ease-of-use. Libraries like React make that easy, but a lot of the frameworks and tooling are built around the assumption that the end goal is a single-page app (SPA): the entire page is interactive, and page loads are handled by swapping out what’s on the page. Think Next.js, or the Vue equivalent, Nuxt.</p>

<p>That’s not the way I want my documents to operate. I’m not building web applications, just adding isolated interactive demos to an otherwise static medium. In the last year, I’ve discovered a great framework, <a href="https://astro.build">Astro</a>, that fits that exact niche. Usually, I prefer to avoid frameworks, but I have been happy enough with Astro to document my experience with it.</p>

<figure>
  <img src="/assets/images/2023-12-30-interactive-demos-using-astro/page-overview.png" width="300" alt="A drawing of a web page, most of which does not use Javascript. There are two interactive demos that do use Javascript." />
  <figcaption>The ideal web page for linear, explanatory text with some interactivity sprinkled in</figcaption>
</figure>

<h2 id="the-astro-approach">The Astro approach</h2>

<p>This is going to sound a bit like I’m writing marketing copy for Astro, but honestly, I found it refreshing that Astro’s philosophy aligned well with mine. Astro promotes content-heavy websites by rendering components on the server, then injecting only the necessary Javascript to make isolated “islands” of interactivity on the client side.</p>

<ol>
  <li>
    <p>Astro allows you to use any Javascript component library (React, Vue, Svelte, Lit, etc.), or Astro’s own component framework, to build a website. Regardless of what you choose, the Javascript is executed on the server to output static HTML. CSS pre-processors, like Sass, are also supported. The important piece is that <strong>no client-side Javascript is shipped</strong>. That means, unlike SPA frameworks, you get multiple pages with static HTML and CSS with links between them. At the same time, I still get to use components, allowing me to refactor common elements when coding.</p>
  </li>
  <li>
    <p>When I use a third-party component library like React, I can optionally mark a component as a client component. The component is still rendered on the server, but it is “hydrated” (brought to life with Javascript) on the client. I can enable this when the page loads or when the server-rendered HTML for the component is scrolled into view (Astro uses the <a href="https://developer.mozilla.org/en-US/docs/Web/API/Intersection_Observer_API">Intersection Observer API</a>). Either way, <strong>only the Javascript needed to enable these client-side components is shipped to the browser</strong>. This is the <a href="https://docs.astro.build/en/concepts/islands/">Island Architecture</a>. Note that it is possible to share state between islands, which I do in some limited cases.</p>
  </li>
</ol>

<p>(As a side note, I chose to use Astro components for anything server-only and Svelte for anything client-side. For my personal projects, I like Svelte’s approach of using a compiler to emit targeted DOM updates, as if I were using jQuery or vanilla JS.)</p>

<p>Both of these pieces of functionality are ones I could build myself, which I appreciate conceptually. Doing all that in a framework-agnostic way, with Typescript, hot reload and so on, is what Astro brings to the table.</p>
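<p>Putting the two together, a page in this style might look like the following sketch. The component names and file paths here are hypothetical, not taken from my actual projects; the <code class="language-plaintext highlighter-rouge">client:visible</code> directive is what marks the Svelte component as an island to hydrate when it scrolls into view:</p>

<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>---
// src/pages/demo.astro — this frontmatter runs only on the server
import Layout from '../layouts/Layout.astro';
import CurveDemo from '../components/CurveDemo.svelte';
---
&lt;Layout title="Interactive demo"&gt;
  &lt;p&gt;This paragraph ships as plain HTML, with no Javascript.&lt;/p&gt;

  &lt;!-- Server-rendered, then hydrated when scrolled into view --&gt;
  &lt;CurveDemo client:visible /&gt;
&lt;/Layout&gt;
</code></pre></div></div>

<p>Everything outside the island stays static; only the Javascript backing <code class="language-plaintext highlighter-rouge">CurveDemo</code> is sent to the browser.</p>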

<h2 id="what-ive-built-with-astro">What I’ve built with Astro</h2>

<p>Disclaimer: I haven’t used Astro professionally, though I would totally consider it if I worked at a startup where content-heavy microsites are needed with minimal fuss. However, I have thoroughly enjoyed using Astro for two of my personal projects, both focused around teaching.</p>

<p>First, <a href="https://cstheory.avikdas.com">Interactive Computer Science</a>, where I discovered Astro. I like using common server components for a consistent visual treatment across the website, for things like definitions and study tips. The client components are reserved for the interspersed interactive exercises and visualizations. I had a lot of fun building a full Turing machine simulator and its associated UX. Best of all, I was able to utilize the interactive exercises in <a href="/2023/07/17/my-two-semesters-of-teaching.html">my class</a>!</p>

<figure>
 <img src="/assets/images/2023-12-30-interactive-demos-using-astro/turing-machine.png" width="500" alt="Screenshot from Interactive Computer Science, showing a running Turing machine with the current configuration of the machine highlighted" />
 <figcaption>I used this Turing machine simulator as an interactive exercise during my lectures</figcaption>
</figure>

<p>Second, <a href="https://nesdev.avikdas.com">NES development on the web</a>. This is another content-heavy project, but the interactive visualizations are prominent. In particular, I was able to embed a WebAssembly-based 6502 assembler and an NES emulator, allowing readers to write 6502 assembly code and run it right in the browser! Outside of this use case, I’m also using client components for the type of interactive visualizations I wish I had when learning Game Boy Advance programming before college, things like visualizing bit fields and other low-level data representations.</p>

<figure>
  <img src="/assets/images/2023-12-30-interactive-demos-using-astro/bit-representation-nesdev.png" width="400" alt="Screenshot from NES development, with a row of pixels at the top with various colors and their corresponding bit representations below" />
  <figcaption>When learning about how graphics are represented on the NES, readers can change the top row and see the bottom rows update in realtime</figcaption>
</figure>

<p>As with any framework, I have spent time wrangling Astro. But overall, Astro, Typescript, SASS and Svelte are all tools that have allowed me to focus on the content of my visualizations, not the infrastructure that powers them.</p>

<h3 id="deploymenthosting">Deployment/hosting</h3>

<p>This part isn’t specific to Astro, but if you ensure your server-rendered HTML is static (no per-user differences, no fetching data dynamically for each request, etc.), you can deploy to any host that supports static HTML. For my pet projects, I’ve been happy using <a href="https://www.fastmail.help/hc/en-us/articles/1500000280141-How-to-set-up-a-website">Fastmail’s static website feature</a>. I could also have used GitHub Pages, of course.</p>
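<p>As a sketch, assuming a generic static host reachable over SSH (the host and paths are placeholders; Fastmail in particular uses its own upload mechanisms rather than rsync), deployment amounts to building and copying files:</p>

<div class="language-sh highlighter-rouge"><div class="highlight"><pre class="highlight"><code># Build the site; by default, Astro writes the static output to dist/
npm run build

# Copy the output to the web root of the static host
rsync -avz --delete dist/ user@example-host:/path/to/webroot/
</code></pre></div></div>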

<h2 id="why-not-convert-this-blog-to-astro">Why not convert this blog to Astro?</h2>

<p>It really feels like my blog is the perfect fit for Astro. To be honest, I think so too and I’m tempted to rewrite the entire blog using Astro instead of Jekyll. I can get rid of a bunch of hand-rolled Javascript and fully utilize a UI library like Preact as a first-class citizen (instead of just pulling it in via a CDN).</p>

<p>For now, however, I’m going to hold off, for the same reason I’m wary about all-in-one frameworks in general. The more dependencies there are, the more complicated both development and maintenance become. As a practical example, I have an item on my to-do list to upgrade the interactive CS website to the latest Astro, something that’s blocked on a conflict between the latest Typescript and the latest Astro. On the flip side, I feel like Jekyll, especially used in conjunction with the default GitHub Pages infrastructure, has been mostly set-and-forget. For my blog, I’m going to use “boring” technologies as much as possible. I want my blog to be <a href="https://dubroy.com/blog/cold-blooded-software/">cold-blooded software</a>.</p>

<p>And if it weren’t for the sheer density of interactivity in my online teaching material, I would consider ditching Astro for those projects too. I’ve enjoyed Astro, but I wish I could use fewer dependencies.</p>]]></content><author><name>Avik Das</name></author><summary type="html"><![CDATA[This blog is mostly text and images, but I’m a big fan of adding interactive components to make my explanations more effective. See my post on rendering curves in 3D as an example. For those interactive demos, the browser environment, with Javascript for the interactivity, is a fantastic delivery mechanism with wide reach and ease-of-use. Libraries like React make that easy, but a lot of the frameworks and tooling are built around the assumption that the end goal is a single-page app (SPA): the entire page is interactive, and page loads are handled by swapping out what’s on the page. Think Next.js, or the Vue equivalent, Nuxt.]]></summary></entry><entry><title type="html">Containerized services on a home server</title><link href="https://avikdas.com/2023/08/23/containerized-services-on-a-home-server.html" rel="alternate" type="text/html" title="Containerized services on a home server" /><published>2023-08-23T00:00:00+00:00</published><updated>2023-08-23T00:00:00+00:00</updated><id>https://avikdas.com/2023/08/23/containerized-services-on-a-home-server</id><content type="html" xml:base="https://avikdas.com/2023/08/23/containerized-services-on-a-home-server.html"><![CDATA[<p>With my <a href="/2023/08/21/setting-up-a-micro-pc-as-a-linux-server.html">mini PC server set up with Debian</a>, I prepared the server for actually running useful services. This time, I decided I would go all in with containers, hoping that will keep my applications self-contained enough that I don’t have to think about different applications stepping on each other.</p>

<p>I don’t claim to be an expert, and I’ve been piecing together this knowledge through many online resources. Like the last post, a lot of these are notes for myself.</p>

<p>One thing to note is I’m a very stubborn person, and a running theme is me doing things the non-standard way just on principle 😅</p>

<h2 id="podman-and-podman-compose">Podman and Podman Compose</h2>

<p>The first controversial decision is to use Podman instead of the industry-standard Docker. Podman attracted me because it doesn’t use a daemon-based architecture, meaning individual containers will run under specific users, instead of a single daemon typically running as root. I could also say I was concerned about <a href="https://blog.alexellis.io/docker-is-deleting-open-source-images/">Docker’s approach to monetization</a>, but Red Hat (makers of Podman) has <a href="https://arstechnica.com/information-technology/2023/06/red-hats-new-source-code-policy-and-the-intense-pushback-explained/">generated some controversy</a> lately as well. Mostly, I like the daemon-less architecture and thought this would be a good time to play around with some new technology.</p>

<p>Installing Podman, and the associated Podman Compose for small-scale container orchestration, is easy:</p>

<div class="language-sh highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="nb">sudo </span>apt <span class="nb">install </span>podman podman-compose
</code></pre></div></div>

<p>Note that Bullseye, the previous stable version of Debian, shipped a pretty old version of Podman and required <a href="https://github.com/containers/podman-compose#installation">installing Podman Compose manually</a>. Moreover, that old version of Podman meant you needed to install an older version of Compose from a branch. With Bookworm, I don’t have this problem.</p>

<h3 id="compatibility-with-docker">Compatibility with Docker</h3>

<p>With this setup, I can usually just use any <code class="language-plaintext highlighter-rouge">docker-compose.yml</code> file almost as-is. Instead of running <code class="language-plaintext highlighter-rouge">sudo docker-compose -f &lt;filename.yml&gt; up</code>, I just run <code class="language-plaintext highlighter-rouge">podman-compose -f &lt;filename.yml&gt; up</code>. Very convenient, thanks to the <a href="https://opencontainers.org/">Open Container Initiative</a> creating industry-wide standards that multiple tools can leverage. But there are two major differences I need to think about when adapting instructions for Docker to use Podman:</p>

<ul>
  <li>
    <p>A lot of Docker Compose files use image names that are not prefixed with the hostname of any container registry. This is because Docker is configured to default to <code class="language-plaintext highlighter-rouge">docker.io</code>, the Docker company’s official registry. I can configure Podman to do the same, but I like being explicit with my code and configuration. This means if an image is referenced without a registry hostname, I just have to prepend <code class="language-plaintext highlighter-rouge">docker.io/</code> to the name.</p>
  </li>
  <li>
    <p>At least as of Podman Compose 1.0.3, I found <code class="language-plaintext highlighter-rouge">.env</code> file handling not where I was expecting it to be. Generally, these files are used in two ways: to substitute values into the Compose file itself, and to pass along environment variables into the running containers. Using the <a href="https://docs.docker.com/compose/compose-file/05-services/#env_file"><code class="language-plaintext highlighter-rouge">env_file</code> directive</a>, you can use a filename other than <code class="language-plaintext highlighter-rouge">.env</code>. However, I found that doing so prevented values from being substituted directly in the Compose file. For now, I’m making sure each service I want to configure has its own directory containing a default-name <code class="language-plaintext highlighter-rouge">.env</code> file when needed.</p>
  </li>
</ul>
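<p>The first point amounts to a one-line change in the Compose file. A hypothetical fragment (the service and image are only illustrative):</p>

<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>services:
  db:
    # Docker resolves the bare name "postgres:15" against docker.io
    # implicitly; for Podman, I spell out the registry
    image: docker.io/postgres:15
</code></pre></div></div>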

<h3 id="configuring-inter-container-networking">Configuring inter-container networking</h3>

<p>When trying to set up some more complex applications, I found that containers were not able to resolve each other by container name. While debugging, I tried a bunch of solutions, only to find that I needed to reboot (or probably run some command, but rebooting did the trick). So, I don’t know if everything below is necessary, and it’s worth trying just the first command to see if that’s enough. Just remember to reboot!</p>

<p>First, install the <code class="language-plaintext highlighter-rouge">golang-github-containernetworking-plugin-dnsname</code> package. Theoretically, this should be enough, as it allows containers to DNS resolve each other by container name, as long as they are in the same virtual network:</p>

<div class="language-sh highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="nb">sudo </span>apt <span class="nb">install </span>golang-github-containernetworking-plugin-dnsname
</code></pre></div></div>

<p>But, when I was trying to figure out the networking prior to rebooting, I saw some errors that prompted me to do the following:</p>

<div class="language-sh highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="nb">sudo </span>apt <span class="nb">install </span>dbus-user-session
<span class="nb">sudo </span>systemctl <span class="nt">--user</span> start dbus
</code></pre></div></div>

<h3 id="rootless-logging">Rootless logging</h3>

<p>Another issue I encountered was errors around logging. This was especially relevant when I was trying to debug the inter-container networking issues I described above. I don’t know too much about this, but it seems like the standard <code class="language-plaintext highlighter-rouge">journald</code>-based logging requires some extra permissions. The way I ended up fixing the issues was to switch to file-based logging for the user in question (I talk more about the user setup below). For example, when setting up <a href="https://immich.app/">Immich</a>, I updated the container config as follows:</p>

<div class="language-sh highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="nb">sudo</span> <span class="nt">-u</span> immich <span class="nb">mkdir</span> ~immich/.config/containers
<span class="nb">sudo</span> <span class="nt">-u</span> immich <span class="nb">cp</span> <span class="se">\</span>
  /usr/share/containers/containers.conf <span class="se">\</span>
  ~immich/.config/containers/containers.conf  <span class="c"># copy over the default config</span>
sudoedit <span class="nt">-u</span> immich ~immich/.config/containers/containers.conf
</code></pre></div></div>

<p>In this configuration file, set:</p>

<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>events_logger = "file"
log_driver = "k8s-file"
</code></pre></div></div>

<p>It looks like I could have just <a href="https://serverfault.com/a/1011140">added the user in question to the <code class="language-plaintext highlighter-rouge">systemd-journal</code> group</a>. For now, I’m not bothering, but I’m willing to try it out the next time I encounter this problem.</p>

<p>EDIT (May 25, 2025): I tried adding a user to the <code class="language-plaintext highlighter-rouge">systemd-journal</code> group, and it worked! The original error I was getting when trying to run something like <code class="language-plaintext highlighter-rouge">podman logs</code> was:</p>

<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>Error: initial journal cursor: failed to get cursor: cannot assign requested address
</code></pre></div></div>

<p>Then, I ran, for one of my later services:</p>

<div class="language-sh highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="nb">sudo </span>usermod <span class="nt">-a</span> <span class="nt">-G</span> systemd-journal jitsi
</code></pre></div></div>

<p>After restarting the service, I was able to get the logs just fine. No need to update the container config as described above.</p>

<h2 id="one-user-per-service">One user per service</h2>

<p>Using Podman’s rootless architecture, I decided that I’ll run each service as a separate user. Additionally, I wanted these users to be <em>system users</em>. Unlike regular users, system users don’t, by default, have a login shell, so they can’t be logged into. They also don’t show in a listing of login users, say in the login screen of a graphical installation. This latter point is moot for me because I didn’t install a GUI. Again, I’m making these choices on principle.</p>

<p>First, I added a <code class="language-plaintext highlighter-rouge">services</code> group to make it easy to give common permissions to all the service users. By default, system users are placed in the <code class="language-plaintext highlighter-rouge">nogroup</code> group, so I wanted a shared group for these users.</p>

<div class="language-sh highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="nb">sudo </span>addgroup <span class="nt">--system</span> services
</code></pre></div></div>

<p>Next, I added the user. For example, when preparing to set up <a href="https://forgejo.org/">Forgejo</a>, I created a system user called <code class="language-plaintext highlighter-rouge">forgejo</code>. Two things to note are that I have to explicitly ask for the user to be added to the <code class="language-plaintext highlighter-rouge">services</code> group, and I have to explicitly specify the home directory. By default, system users have their home directory set to <code class="language-plaintext highlighter-rouge">/nonexistent</code>, which doesn’t exist and is not created by the <code class="language-plaintext highlighter-rouge">adduser</code> command. I was hoping to get away with no home directory, but unfortunately, Podman stores its data in the running user’s home directory.</p>

<div class="language-sh highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="nb">sudo </span>adduser <span class="se">\</span>
  <span class="nt">--system</span> <span class="se">\</span>
  <span class="nt">--comment</span> <span class="s1">'Forgejo system user'</span> <span class="se">\</span>
  <span class="nt">--home</span> /home/forgejo <span class="se">\</span>
  <span class="nt">--ingroup</span> services <span class="se">\</span>
  forgejo

<span class="c"># The above command should output the user ID of the new user. But if you</span>
<span class="c"># forget, you can check after the fact:</span>
<span class="nb">id </span>forgejo  <span class="c"># in this case, the ID is 102</span>
</code></pre></div></div>

<p>Next, I had to set up subuids and subgids for the user. The way containers work is they <a href="https://www.funtoo.org/LXD/What_are_subuids_and_subgids%3F">run processes and create files/directories under “virtual users”</a>. This way, the container-specific processes and data don’t clash with existing users on the system. To do this, subuids and subgids allow reserving a large range of user and group IDs for the parent user to allocate as needed.</p>

<div class="language-sh highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="c"># Check the current range of subuids/subgids</span>
<span class="c"># Format is "username:startid:numids"</span>
<span class="nb">cat</span> /etc/subuid
<span class="nb">cat</span> /etc/subgid

<span class="c"># Adjust the command to use the next available range</span>
<span class="c"># Format is "startid-endid"</span>
<span class="nb">sudo </span>usermod <span class="nt">--add-subuids</span> 1001000000-1001999999 forgejo
<span class="nb">sudo </span>usermod <span class="nt">--add-subgids</span> 1001000000-1001999999 forgejo

<span class="c"># Confirm the subuids/subgids were added</span>
<span class="nb">cat</span> /etc/subuid
<span class="nb">cat</span> /etc/subgid
</code></pre></div></div>
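<p>The “next available range” bookkeeping can be scripted. The following is a hypothetical helper, not part of any standard tooling: it scans an <code class="language-plaintext highlighter-rouge">/etc/subuid</code>-style file and prints the first ID past every allocated range, assuming all ranges sit at or above a fixed base:</p>

<div class="language-sh highlighter-rouge"><div class="highlight"><pre class="highlight"><code># Hypothetical helper: print the next free start ID in a subuid/subgid
# file, assuming all ranges are allocated at or above the base
next_range_start() {
  awk -F: -v base=1001000000 '
    BEGIN { max = base }
    { end = $2 + $3; if (end &gt; max) max = end }
    END { print max }' "$1"
}

# Example: with one range already allocated to another service user,
# the next 1,000,000-wide range starts right after it
printf 'immich:1001000000:1000000\n' &gt; /tmp/subuid.example
next_range_start /tmp/subuid.example  # prints 1002000000
</code></pre></div></div>

<p>The printed ID can then be used as the start of the range passed to <code class="language-plaintext highlighter-rouge">usermod --add-subuids</code>.</p>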

<p>Finally, when running containers, I encountered errors related to the fact that the users running the containers were not logged in. The systemd login manager can start up a “user manager” for non-logged in users by enabling lingering:</p>

<div class="language-sh highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="c"># Use the user ID of the user</span>
<span class="nb">sudo </span>loginctl enable-linger 102
</code></pre></div></div>

<p>Note that the home directory, the subuids/subgids and lingering would automatically be set up for non-system users. But again, on principle, these users <em>have</em> to be system users!</p>

<h2 id="systemd">Systemd</h2>

<p>With this setup, I can already start up a service using Podman Compose. For example, for Forgejo, I would run:</p>

<div class="language-sh highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="c"># Run as the forgejo user</span>
<span class="c"># Run in daemon mode (in the background)</span>
<span class="nb">sudo</span> <span class="nt">-u</span> forgejo podman-compose <span class="nt">-f</span> /path/to/forgejo-compose.yml up <span class="nt">-d</span>
</code></pre></div></div>

<p>In fact, I would do exactly this to test that the service works. But, because Podman doesn’t use a global daemon, nothing exists to start up running containers after a system reboot (Docker supports this with the <a href="https://docs.docker.com/compose/compose-file/05-services/#restart"><code class="language-plaintext highlighter-rouge">restart</code> directive</a>). Instead, I use systemd to manage the application as a service. I start by creating a service configuration file called <code class="language-plaintext highlighter-rouge">forgejo.service</code>. A few things to note about this service are:</p>

<ul>
  <li>It runs as the <code class="language-plaintext highlighter-rouge">forgejo</code> user, under the <code class="language-plaintext highlighter-rouge">services</code> group and with the home directory as the working directory.</li>
  <li>All paths are absolute.</li>
  <li>By setting the dependency via the <code class="language-plaintext highlighter-rouge">After</code> and <code class="language-plaintext highlighter-rouge">Wants</code> directives, I ensure the service starts up on its own after a reboot, and that too at the right point in the system initialization.</li>
</ul>

<div class="language-ini highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="nn">[Unit]</span>
<span class="py">Description</span><span class="p">=</span><span class="s">Forgejo self-hosted lightweight software forge</span>
<span class="py">After</span><span class="p">=</span><span class="s">network.target</span>
<span class="py">Wants</span><span class="p">=</span><span class="s">network.target</span>

<span class="nn">[Service]</span>
<span class="py">Type</span><span class="p">=</span><span class="s">oneshot</span>
<span class="py">RemainAfterExit</span><span class="p">=</span><span class="s">true</span>
<span class="py">User</span><span class="p">=</span><span class="s">forgejo</span>
<span class="py">Group</span><span class="p">=</span><span class="s">services</span>
<span class="py">WorkingDirectory</span><span class="p">=</span><span class="s">/home/forgejo</span>
<span class="py">ExecStart</span><span class="p">=</span><span class="s">/usr/bin/podman-compose -f /path/to/forgejo/forgejo.yml up -d</span>
<span class="py">ExecStop</span><span class="p">=</span> <span class="s">/usr/bin/podman-compose -f /path/to/forgejo/forgejo.yml down</span>

<span class="nn">[Install]</span>
<span class="py">WantedBy</span><span class="p">=</span><span class="s">multi-user.target</span>
</code></pre></div></div>

<p>I can install this service by placing the configuration file in the system-wide services directory, enabling the service and starting it up.</p>

<div class="language-sh highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="c"># Running this in /path/to/forgejo</span>
<span class="nb">sudo cp </span>forgejo.service /etc/systemd/system/forgejo.service
<span class="nb">sudo </span>systemctl <span class="nb">enable </span>forgejo.service

<span class="nb">sudo </span>systemctl start forgejo.service
<span class="nb">echo</span> <span class="nv">$?</span>  <span class="c"># Confirm the service started up correctly</span>
         <span class="c"># The return code should be 0</span>
</code></pre></div></div>

<p>At this point, the service will start up automatically after a reboot. If I want to stop or restart the service myself, I can do that too:</p>

<div class="language-sh highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="nb">sudo </span>systemctl restart forgejo.service   
<span class="nb">sudo </span>systemctl stop forgejo.service  
</code></pre></div></div>

<p>Finally, the <code class="language-plaintext highlighter-rouge">start</code> and <code class="language-plaintext highlighter-rouge">restart</code> commands are a bit of a black box, and you don’t get to see errors or other logs on the command line. Instead, you can use <code class="language-plaintext highlighter-rouge">journald</code> to view the logs. Unfortunately, this doesn’t include all the logging, namely the part where the container images are downloaded. Given that this part can take a long time, I suggest running <code class="language-plaintext highlighter-rouge">podman-compose</code> manually to download the images before running it via systemd.</p>

<div class="language-sh highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="nb">sudo </span>journalctl <span class="nt">-fxeu</span> forgejo.service
</code></pre></div></div>

<p>All of this might seem like a disadvantage compared to Docker, but I prefer this system. I think it follows the Unix philosophy, letting Podman focus on containerization and systemd focus on service lifecycle.</p>

<h2 id="reverse-proxy">Reverse proxy</h2>

<p>There has been a lot of setup, but we’re almost done. The last part is making the service available on the internet, so I can access it when I’m not at home. I could definitely use a self-hosted VPN, and I might do that for some services in the future, but I want to share some of these services with other people.</p>

<p>The basic setup has a few parts:</p>

<ol>
  <li>I own some domains, so I use a subdomain for each service pointing to my home IP address.</li>
  <li>My router is set up to forward specific ports to my server.</li>
  <li>On the server, the <a href="https://caddyserver.com/">Caddy web server</a> proxies requests to different internal ports based on the subdomain being accessed.</li>
</ol>

<p>Here’s the final architecture, which I’ll describe in more detail below:</p>

<figure>
  <img src="/assets/images/2023-08-23-containerized-services-on-a-home-server/network-architecture.svg" width="600" alt="Diagram of the architecture described above: the DNS provider uses an A record to point to my home router, which uses port forwarding to point to Caddy, which acts as a reverse proxy to my internal services" />
</figure>

<h3 id="subdomains-point-to-my-home-ip-address">Subdomains point to my home IP address</h3>

<p>This part is pretty straightforward. I just log into my domain registrar’s DNS settings and create a new subdomain, set up as an A record. Generally my IP address doesn’t change frequently, but it is technically dynamic, so I want to automatically update the A record when my IP address changes. To do this, I use <a href="https://ddclient.net/">DDclient</a>.</p>

<p>The exact details of how to set up DDclient will depend on your DNS provider, but you should get a configuration dialog during installation or if you manually reconfigure:</p>

<div class="language-sh highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="nb">sudo </span>apt <span class="nb">install </span>ddclient

<span class="c"># To reconfigure later</span>
<span class="nb">sudo </span>dpkg-reconfigure ddclient

<span class="c"># Or manually edit the configuration file</span>
sudoedit /etc/ddclient.conf

<span class="c"># Don't forget to manually refresh</span>
<span class="nb">sudo </span>ddclient
</code></pre></div></div>

<p>What I like to do is set up my subdomain to point to <code class="language-plaintext highlighter-rouge">0.0.0.0</code>, update the configuration to include the new subdomain and refresh. This way, I can verify the subdomain is going to update correctly if my IP address changes.</p>
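<p>For reference, <code class="language-plaintext highlighter-rouge">/etc/ddclient.conf</code> has a simple line-based format. The exact protocol and credential fields depend on your DNS provider, so treat everything below as a placeholder sketch, not a working configuration:</p>

<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code># Detect the current public IP via an external web service
use=web

# Provider-specific settings; check the ddclient docs for your provider
protocol=cloudflare
login=my_email@example.com
password=my-api-token
zone=mydomain.com

# The subdomain to keep updated
mysubdomain.mydomain.com
</code></pre></div></div>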

<h3 id="using-caddy-as-a-reverse-proxy">Using Caddy as a reverse proxy</h3>

<p>I want all the services on my server to be available over port 443, instead of having to specify the port when accessing most of the services. Additionally, I don’t want to have individual containers bind to ports like 443, which would require the service users to have additional privileges. Doing this requires a few steps:</p>

<ol>
  <li>
    <p>Configure my router to forward ports 80 and 443 to my server.</p>
  </li>
  <li>
    <p>Use <a href="https://caddyserver.com/">Caddy</a> with virtual domains as a reverse proxy to the services. Caddy is the only service on the system listening on ports 80 and 443. I like Caddy for this simple use case because, unlike Nginx, the configuration is simple and I don’t have to separately configure Certbot to provision Let’s Encrypt HTTPS certificates.</p>
  </li>
  <li>
    <p>Ensure that services that expose ports only expose non-privileged ports, ones numbered 1024 and above. For example, internally, a service might bind to port 80 inside the container but expose that as port 3000 on the host. This is something I have to check in the Podman Compose configuration files, because a lot of times, they try to expose privileged ports. I also make sure to <em>not</em> enable HTTPS for that service if that’s an option.</p>
  </li>
</ol>
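<p>Concretely, the third point looks like this in a Compose file (a hypothetical fragment; the image name is only illustrative):</p>

<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>services:
  web:
    image: docker.io/example/webapp:latest
    ports:
      # host:container - the app binds to port 80 inside the container,
      # but the host only exposes the unprivileged port 3000
      - "3000:80"
</code></pre></div></div>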

<p>After configuring my router’s port forwarding and starting up a service on a non-privileged port, I installed Caddy:</p>

<div class="language-sh highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="nb">sudo </span>apt <span class="nb">install </span>caddy
</code></pre></div></div>

<p>Before configuring any specific services, I need to add some global configuration. Opening up <code class="language-plaintext highlighter-rouge">/etc/caddy/Caddyfile</code>, I commented out the default configuration and added the following:</p>

<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>{
    # Used primarily as the email to associate with Let's Encrypt certificates,
    # in case any communications are needed.
    email my_email@example.com
}
</code></pre></div></div>

<p>Now, I can add service-specific configuration, one block per service. Almost all the services are similar:</p>

<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>mysubdomain.mydomain.com {
    # Point to whatever port the internal service exposes
    reverse_proxy :3000
}
</code></pre></div></div>

<p>Since I don’t specify a protocol (for example <code class="language-plaintext highlighter-rouge">http://</code>), Caddy defaults to HTTPS and provisions a Let’s Encrypt certificate for this domain. This works automatically as long as port 80 on my router is being forwarded to this Caddy instance.</p>

<p>Now, I just restart Caddy and I’m good to go:</p>

<div class="language-sh highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="nb">sudo </span>systemctl restart caddy
</code></pre></div></div>

<p>Note that many services allow you to specify what hostname they will run on. This is typically configured as an environment variable or as part of a configuration file. Among other reasons, configuring the hostname is useful for display purposes within the application.</p>

<figure>
  <p><img src="/assets/images/2023-08-23-containerized-services-on-a-home-server/forgejo-clone-url.png" alt="The clone URL displayed for one of my git repos on my Forgejo instance, showing the hostname I configured" /></p>
  <figcaption>The clone URL displayed for my git repos on my Forgejo instance shows the correct domain due to application-specific configuration</figcaption>
</figure>

<h2 id="tooling-for-managing-services">Tooling for managing services</h2>

<p>Because I have to customize my application installations with details such as file paths, exposed ports and user information, I created some tooling to manage these installations. The tooling is straightforward:</p>

<ul>
  <li>
    <p>Each service’s configuration is stored in its own directory. The directory typically consists of the Podman Compose file, the systemd service file and optionally, a <code class="language-plaintext highlighter-rouge">.env</code> file.</p>
  </li>
  <li>
    <p>The parent directory for these services contains a script to copy over the systemd service file to the right place and a README with useful commands. All of this serves as a reminder to myself of how to install and manage these services.</p>
  </li>
  <li>
    <p>These files are managed using git and stored on my Forgejo instance. Meta!</p>
  </li>
</ul>
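<p>To give a sense of the shape of these files, here’s a sketch of what one of the per-service systemd units might look like. The description, paths and targets are placeholders rather than my real configuration:</p>

<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>[Unit]
Description=Example containerized service
Wants=network-online.target
After=network-online.target

[Service]
# Run the Compose file stored in this service's own directory
WorkingDirectory=/path/to/service
ExecStart=/usr/bin/podman-compose up
ExecStop=/usr/bin/podman-compose down
Restart=on-failure

[Install]
WantedBy=multi-user.target
</code></pre></div></div>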

<p>I’m not sharing the repo because I don’t want to share all the specific details of my server setup, like file paths and hostnames.</p>

<hr />

<p>With these steps, I’m happy with how isolated each service is, how it automatically starts up with the machine and how little extra maintenance is needed once I get a service running. Even getting the service installed in the first place is easy thanks to containerization. I installed two services recently in just a few minutes.</p>

<p>Nothing about these steps is revolutionary, as they use off-the-shelf tools combined exactly as they are meant to be. Having this documented here hopefully helps others understand the larger ecosystem of tools and how they can be put together to spin up a useful, low-maintenance home server.</p>]]></content><author><name>Avik Das</name></author><summary type="html"><![CDATA[With my mini PC server set up with Debian, I prepared the server for actually running useful services. This time, I decided I would go all in with containers, hoping that will keep my applications self-contained enough that I don’t have to think about different applications stepping on each other.]]></summary></entry><entry><title type="html">Setting up a micro PC as a Linux server</title><link href="https://avikdas.com/2023/08/21/setting-up-a-micro-pc-as-a-linux-server.html" rel="alternate" type="text/html" title="Setting up a micro PC as a Linux server" /><published>2023-08-21T00:00:00+00:00</published><updated>2023-08-21T00:00:00+00:00</updated><id>https://avikdas.com/2023/08/21/setting-up-a-micro-pc-as-a-linux-server</id><content type="html" xml:base="https://avikdas.com/2023/08/21/setting-up-a-micro-pc-as-a-linux-server.html"><![CDATA[<figure>
  <p><img src="/assets/images/2023-08-21-setting-up-a-micro-pc-as-a-linux-server/machine-front.jpg" alt="The front of the Dell OptiPlex 7040 Micro" /></p>

  <figcaption>My little server, sitting next to my router</figcaption>
</figure>

<p>This blog started at the end of 2018 as a way to document <a href="/2018/12/31/setting-up-lcd-screen-on-raspberry-pi.html">how I set up my Raspberry Pi</a>. Some time ago, the Pi finally broke down, and I’ve had terrible luck with Micro SD card corruption. After a few unsuccessful attempts to get the Pi running again, I picked up a used <a href="https://www.dell.com/support/manuals/en-us/optiplex-7040-desktop/opti7040m_om/specifications?guid=guid-a33190d8-64df-4b91-a7f1-def96c724916&amp;lang=en-us">Dell OptiPlex 7040 Micro</a>. Here are my notes on getting this server set up, since I’ve learned a thing or two in the last 4.5 years. Most are notes for myself.</p>

<p>Today’s post will cover getting the server hardware set up and Linux installed. I’ll cover more about the home server capabilities in later posts.</p>

<h2 id="hardware">Hardware</h2>

<p>As mentioned above, I’m using a Dell OptiPlex 7040 Micro because that’s what I found on Craigslist. Given that I was happy with a Raspberry Pi 3B+ in the past, this system is overkill. But, it is nice having a modular, upgradeable system compared to a System on a Chip (SoC).</p>

<p>To that point, I picked up a 1TB 2.5” SATA SSD to swap into the system, and I installed an older 512GB M.2 NVMe SSD I had lying around (it was one I thought had died, so I had replaced it on a different machine, but it turned out to be working fine). I’m using the NVMe drive for the OS installation and <code class="language-plaintext highlighter-rouge">/home</code> partition, and the SATA drive for the actual storage. For example, I installed <a href="https://immich.app/">Immich</a> to back up my photos, and I’m using the SATA drive for storing the photos.</p>

<p>It’s nice to have these types of hardware slots inside the small form factor, compared to using a USB drive sticking out of my Raspberry Pi.</p>

<h3 id="networking">Networking</h3>

<p>The Dell OptiPlex 7040 does support wireless, namely WiFi and Bluetooth, but the unit I picked up didn’t have the necessary hardware installed. I did some research on installing an M.2 wireless module. Ultimately, it’s not that important to me because I placed the machine near my router with an ethernet connection, so I passed on adding this hardware.</p>

<h2 id="installing-debian">Installing Debian</h2>

<p>Having started my Linux journey with Ubuntu, I now run Debian on my personal laptop. I went with Debian for the server as well. The newest stable release, Bookworm, recently came out, so I’m okay with stable for now and will update to testing if I feel anything has gotten too old.</p>

<p>I didn’t install a graphical desktop environment because I wanted the machine to be a headless server. I configured an SSH server during installation, so I can remotely log into my server. Just as importantly, I <em>didn’t</em> configure a web server. Debian’s default is to use Apache, and I prefer to use Caddy or Nginx for my relatively meager needs.</p>

<p>Overall, installing Debian with the graphical installer was straightforward, but there were a few additional things I needed to do.</p>

<h3 id="configuring-efi">Configuring EFI</h3>

<p>It seems there’s a bug with the EFI firmware on the OptiPlex specifically related to the NVMe drives (everything worked out of the box when I initially installed Debian on the original SATA drive). Basically, Debian puts its EFI binary in <code class="language-plaintext highlighter-rouge">/boot/efi/EFI/debian/grubx64.efi</code>. Even after going into the boot settings on the machine and changing the EFI binary path, the machine seemed to be looking in a default location of <code class="language-plaintext highlighter-rouge">/boot/efi/EFI/boot/bootx64.efi</code>, causing the machine to think there was no installed OS.</p>

<p>Fixing that was simple once I figured out the issue, with the following steps.</p>

<p>Start up the Debian graphical installer and open a terminal session. When prompted, mount <code class="language-plaintext highlighter-rouge">/dev/nvme0n1p1</code> as that’s the boot partition. Then, copy over the EFI binary from the original location to where the machine expects it:</p>

<div class="language-sh highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="nb">cd</span> /boot/efi/EFI
<span class="nb">mkdir </span>boot
<span class="nb">cp </span>debian/grubx64.efi boot/bootx64.efi
</code></pre></div></div>

<p>Reboot, and Grub should start right up.</p>

<h3 id="configuring-sudo-access">Configuring sudo access</h3>

<p>I didn’t want to keep switching to a root user for all my administration, so I set up <code class="language-plaintext highlighter-rouge">sudo</code>:</p>

<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>avik$ su -
root# apt install sudo
root# usermod -aG sudo avik
</code></pre></div></div>

<h3 id="useful-software">Useful software</h3>

<p>Again, this is mostly for myself, and it’s pretty much the same software I make sure to install on any Debian machine I own:</p>

<div class="language-sh highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="nb">sudo </span>apt <span class="nb">install </span>command-not-found tmux
<span class="nb">sudo </span>apt <span class="nb">install </span>vim
<span class="nb">sudo </span>apt <span class="nb">install </span>curl git

<span class="nb">sudo </span>apt update  <span class="c"># generate command-not-found index</span>
</code></pre></div></div>

<p>Also, adding <code class="language-plaintext highlighter-rouge">export EDITOR=vim</code> to my <code class="language-plaintext highlighter-rouge">.bashrc</code> ensures that <code class="language-plaintext highlighter-rouge">sudoedit</code> (to edit files as root without running your editor as root) uses Vim.</p>

<p>As I’ll talk about in a later post, I will mostly use containers to run software. But historically I’ve used <a href="https://asdf-vm.com/">asdf</a> and some plugins for <a href="https://github.com/asdf-vm/asdf-nodejs">node.js</a>, <a href="https://github.com/asdf-vm/asdf-ruby">Ruby</a> and <a href="https://github.com/asdf-community/asdf-python">Python</a>. I installed those too, probably out of habit.</p>

<h3 id="configuring-the-storage-drive">Configuring the storage drive</h3>

<p>When I install Debian, I always separate the <code class="language-plaintext highlighter-rouge">/home</code> partition. If I do this during a new install, then the installer can set up the system to automatically mount the right partition as my <code class="language-plaintext highlighter-rouge">/home</code> directory. If I want to preserve an existing <code class="language-plaintext highlighter-rouge">/home</code> partition, I’d have to set up the auto-mounting myself.</p>

<p>Either way, I also wanted to auto-mount the SATA storage drive. First, I had to decide which file system to use for the storage drive, and I went with Ext4 for simplicity. I would have enjoyed trying ZFS, but without native support in Linux (that I could find), I stuck with whatever was well supported:</p>

<div class="language-sh highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="nb">sudo </span>fdisk <span class="nt">-l</span>  <span class="c"># find the device for the drive</span>
<span class="nb">sudo </span>mkfs <span class="nt">-t</span> ext4 /dev/sda1  <span class="c"># format that device</span>
</code></pre></div></div>

<p>Now to automount the drive (and these steps generally apply for a <code class="language-plaintext highlighter-rouge">/home</code> partition as well):</p>

<div class="language-sh highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="nb">sudo mkdir</span> /mnt/storage  <span class="c"># create the mount point</span>

<span class="nb">sudo </span>blkid  <span class="c"># figure out the UUID of the drive</span>
sudoedit /etc/fstab  <span class="c"># see below for what to add</span>
<span class="nb">sudo </span>systemctl daemon-reload  <span class="c"># pick up changes to /etc/fstab</span>
<span class="nb">sudo </span>mount <span class="nt">-a</span>  <span class="c"># mount!</span>
</code></pre></div></div>

<p>When editing <code class="language-plaintext highlighter-rouge">/etc/fstab</code>, add the following line:</p>

<div class="language-conf highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="c"># UUID is based on the output of blkid
# dump=0 - tells the legacy dump(8) backup utility to skip this filesystem
# pass=2 - `man fstab` says use "2" for non-root filesystems
</span><span class="n">UUID</span>=...  /<span class="n">mnt</span>/<span class="n">storage</span>  <span class="n">ext4</span>  <span class="m">0</span>  <span class="m">2</span>
</code></pre></div></div>

<p>EDIT (Sep 5, 2023): I got a suggestion about an alternate way to identify disks in <code class="language-plaintext highlighter-rouge">/etc/fstab</code>, which took me down a rabbit hole. Here’s what I found:</p>

<ul>
  <li>
    <p>Firstly, this was something I already knew, but you don’t want to use device identifiers like <code class="language-plaintext highlighter-rouge">/dev/sda1</code>. There’s no guarantee these will stay the same across boots. That’s why I used the UUID. The UUID is stable, at least until reformatting.</p>
  </li>
  <li>
    <p>The suggestion was to use the paths inside of <code class="language-plaintext highlighter-rouge">/dev/disk/by-id</code>. These are symlinks to files like <code class="language-plaintext highlighter-rouge">/dev/sda1</code>, but the filenames are human readable. For example, instead of a UUID like I’m using, my SATA SSD partition would be named <code class="language-plaintext highlighter-rouge">ata-INTEL_SSDSC2KB960G8_BTYF92160AB7960CGN-part1</code>. Definitely nicer! This seems like the way to go, and I’ll try it out in the future.</p>
  </li>
  <li>
    <p>As always, the <a href="https://wiki.archlinux.org/title/fstab#Identifying_file_systems">Arch wiki</a> is fantastic, even for non-Arch users. Note that this wiki page doesn’t talk about the approach mentioned above, but the <a href="https://bbs.archlinux.org/viewtopic.php?id=261988">forums sure discuss it at length</a>!</p>
  </li>
</ul>
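<p>Putting that suggestion into practice, the <code class="language-plaintext highlighter-rouge">/etc/fstab</code> entry would look something like the following, using the partition name from above (I haven’t actually switched my own setup over yet):</p>

<div class="language-conf highlighter-rouge"><div class="highlight"><pre class="highlight"><code># Same mount as before, identified by the human-readable by-id path
/dev/disk/by-id/ata-INTEL_SSDSC2KB960G8_BTYF92160AB7960CGN-part1  /mnt/storage  ext4  0  2
</code></pre></div></div>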

<hr />

<p>With all these changes above, I now have a running Debian server that I can start playing around with. Next up is how I installed the right software to make the server useful!</p>]]></content><author><name>Avik Das</name></author><summary type="html"><![CDATA[]]></summary></entry><entry><title type="html">My two semesters of teaching</title><link href="https://avikdas.com/2023/07/17/my-two-semesters-of-teaching.html" rel="alternate" type="text/html" title="My two semesters of teaching" /><published>2023-07-17T00:00:00+00:00</published><updated>2023-07-17T00:00:00+00:00</updated><id>https://avikdas.com/2023/07/17/my-two-semesters-of-teaching</id><content type="html" xml:base="https://avikdas.com/2023/07/17/my-two-semesters-of-teaching.html"><![CDATA[<p>In 2022, I got the opportunity to live out my dream of teaching a university course, and that too in theoretical computer science. Despite how busy my day job was, I knew I had to take that opportunity or I would regret it. After teaching for two semesters, I found the experience both exhilarating and too much of a time commitment to continue next semester. To get into the habit of writing again, I want to reflect on that experience.</p>

<p>Disclaimer: these are my thoughts after just two semesters of teaching, and I don’t mean for this to be any sort of “words of wisdom”. For that reason, I’ll keep my thoughts light. If anyone with more experience wants to weigh in, I would love to hear your thoughts!</p>

<ol>
  <li>
    <p><strong>There’s a lot of trial-and-error.</strong> I thought teaching required credentials and apprenticeship, the way I saw <a href="https://en.wikipedia.org/wiki/Student_teacher">student teachers</a> practice teaching in high school. Instead, I was given pretty much free rein to teach how I wanted, as long as I submitted grades at the end of the semester. I found it simultaneously freeing to have that autonomy and scary to be trusted to that degree. But, even if I had the credentials, I would still need to adapt my teaching style every semester based on some (informed) trial-and-error. I want to give a huge thanks to my mentor at the same university who guided me on the course design.</p>
  </li>
  <li>
    <p><strong>Going the extra mile is really expensive.</strong> Students appreciated my timely grading, detailed feedback and copious office hours throughout the semester. I wanted students to have as many resources as possible. For example, homework assignments were due as late as possible on the Tuesday before a Thursday exam, and I tried to finish grading by midday on Wednesday so students could use that feedback to study for the exam. Unfortunately, doing this takes a lot of effort, and it was probably the primary reason I burned out on teaching. I don’t blame teachers who prioritize ease of instruction over individualized support.</p>
  </li>
  <li>
    <p><strong>It’s hard to teach within a broken system.</strong> And by system, I mean all of education, not the institution. Students often take a full course load while working full-time due to financial constraints, something I never had to do because of my privilege. They also were not always prepared thoroughly by previous classes, again something I was privileged enough to not worry about because my parents could afford the rent needed to send me to well-funded schools and I had the time to focus on my academics even before college. No matter how much effort I put into teaching, I can’t help someone who doesn’t have the 10-12 hours a week needed to truly learn the material.</p>
  </li>
  <li>
    <p><strong>Inclusive policies can help decrease the burden.</strong> Recordings for all lectures and office hours, open book exams, flexible deadlines if someone asked… all of these prevented the need for additional scrutiny on my part to determine if someone was “worthy” of an accommodation. Sure, if someone had medical documentation, they could request such accommodations via the university, but <a href="https://www.youtube.com/watch?v=7BG_C8E9fKI">inclusive policies benefit those who can’t get a formal diagnosis or are afraid of retaliation</a>. If I kept teaching, I would continue finding ways to extend “accommodations” to all students by default, both to make my life easier and because these accommodations are, as the Speech Prof says, just good teaching practices.</p>
  </li>
</ol>

<p>I once heard that the first year of teaching is just learning to keep your head above water, and I had to give up before I got into the groove, apparently. That does mean the above reflections are based on very little experience. But to be clear: I loved teaching, and I intend to find my way back to it.</p>]]></content><author><name>Avik Das</name></author><summary type="html"><![CDATA[In 2022, I got the opportunity to live out my dream of teaching a university course, and that too in theoretical computer science. Despite how busy my day job was, I knew I had to take that opportunity or I would regret it. After teaching for two semesters, I found the experience both exhilarating and too much of a time commitment to continue next semester. To get into the habit of writing again, I want to reflect on that experience.]]></summary></entry><entry><title type="html">“It’s not peaches and cream either for men”</title><link href="https://avikdas.com/2021/11/29/its-not-peaches-and-cream-either-for-men.html" rel="alternate" type="text/html" title="“It’s not peaches and cream either for men”" /><published>2021-11-29T00:00:00+00:00</published><updated>2021-11-29T00:00:00+00:00</updated><id>https://avikdas.com/2021/11/29/its-not-peaches-and-cream-either-for-men</id><content type="html" xml:base="https://avikdas.com/2021/11/29/its-not-peaches-and-cream-either-for-men.html"><![CDATA[<p>I spend a lot of time talking about men’s mental health because it’s what I, as a man, know about. And like with everything, the truth is complicated. We live in a patriarchal society that privileges men in certain ways, but also hurts them in other ways. The harmful effects on our heavily gendered society especially show themselves when racial or class oppression enter the picture. But ultimately, I have to listen to the experiences of others to piece together this complicated web of good and bad. 
So, when I came across <a href="https://www.washingtonpost.com/news/local/wp/2018/07/20/feature/crossing-the-divide-do-men-really-have-it-easier-these-transgender-guys-found-the-truth-was-more-complex/"><em>Crossing the Divide</em></a>, four accounts of transgender men who have experienced life being treated as women and now living as men, I was fascinated.</p>

<p>You should read the article to see how gender in our society isn’t clear cut. But, I do want to expand on some areas where we can support men better.</p>

<p>(The title of this post is a quote from Trystan Cotten in the article.)</p>

<h2 id="racial-barriers">Racial barriers</h2>

<p>One of the contributors, Trystan Cotten, talks about how being African American affected his life experiences pre- and post-transition. Cotten says it beautifully: “Life doesn’t get easier as an African American male. The way that police officers deal with me, the way that racism undermines my ability to feel safe in the world, affects my mobility, affects where I go.” This lack of safety is gendered too, as he mentions how, pre-transition, he either did not get pulled over or was let off, but post-transition, his increased interactions with the police start with him being asked if he has any weapons.</p>

<p>Alex Poon talked about how his genetics as a Chinese man don’t set him up to have a “lumberjack-style” beard, underscoring a fear that his stereotypically feminine facial features will impede his masculine presentation.</p>

<p>Both these stories make it clear why racial equality is needed. Whether it’s Black Lives Matter, or representation of Asian men in media, gender equality can’t be achieved without racial equality. Regardless of your gender identity or your race, <strong>if you want to create a society that supports men, support racial justice reform in all its forms</strong>.</p>

<h2 id="support-systems">Support systems</h2>

<p>Cotten also points out how there aren’t spaces for men to share their mental struggles. Contrasting his experience in gay, feminist and women’s circles, “there was a space and place you could talk about your feelings. In the last, you know, 10 years or so [post-transition] I can’t find those spaces necessarily for men, and I don’t know if men necessarily make those spaces for each other.”</p>

<p>And it’s not just a responsibility for those of us who are struggling, because we can’t share if no one is listening. Both of the other contributors, Zander Keig and Chris Edwards, talk about how society became less friendly toward them once they transitioned. Keig sums it up well: “What continues to strike me is the significant reduction in friendliness and kindness now extended to me in public spaces. It now feels as though I am on my own: No one, outside of family and close friends, is paying any attention to my well-being.”</p>

<p>Once again, <strong>we can create a better society for all by creating safe and inclusive environments for men</strong>:</p>

<ul>
  <li>
    <p>Some of that is on men who are already creating communities, for example by adopting the right community rules to ban toxicity and allowing (respectful) discussion of topics like mental health. Often, communities for men are taken over by trolls shifting the conversation to blaming women, instead of focusing on the problems men face in an overly gendered society. Community builders need to keep the conversation on a productive discussion of men’s issues.</p>
  </li>
  <li>
    <p>Some of the responsibility falls on everyone who is trying to create social safety nets. As Keig points out from his experience in social work, “when I would suggest that patient behavioral issues like anger or violence may be a symptom of trauma or depression, it would often get dismissed or outright challenged. The overarching theme was ‘men are violent’ and there was ‘no excuse’ for their actions.” Men who behave violently do need to be held accountable for their actions, but we also need to provide better mental health services that understand how those men end up acting violently in the first place.</p>
  </li>
</ul>

<hr />

<p>The stories make it clear there are societal advantages for men, so I don’t want to suggest women have “made it” in our society. But for many men, especially but not limited to those in marginalized groups, the picture isn’t rosy. We need to create a society that treats everyone as valuable, regardless of other factors like race. We need to create support systems to elevate all those in need, at times taking into account the specific needs men have. Only then can we create a society that supports men.</p>

<p>For what it’s worth, I raise awareness for a men’s health charity called <a href="https://movember.com">Movember</a> because mental health is really important to me, and men experience mental health struggles in a specific way that’s deep rooted in our culture of tough masculinity. If you want to help, please <a href="https://conversations.movember.com/en-us/">reach out to a friend</a>, participate in a Movember event to keep the conversations going, or <a href="https://mobro.co/akdas">donate to Movember</a>. Let’s save some lives!</p>]]></content><author><name>Avik Das</name></author><summary type="html"><![CDATA[I spend a lot of time talking about men’s mental health because it’s what I, as a man, know about. And like with everything, the truth is complicated. We live in a patriarchal society that privileges men in certain ways, but also hurts them in other ways. The harmful effects on our heavily gendered society especially show themselves when racial or class oppression enter the picture. But ultimately, I have to listen to the experiences of others to piece together this complicated web of good and bad. So, when I came across Crossing the Divide, four accounts of transgender men who have experienced life being treated as women and now living as men, I was fascinated.]]></summary></entry><entry><title type="html">It’s okay to not be okay</title><link href="https://avikdas.com/2021/11/02/its-okay-to-not-be-okay.html" rel="alternate" type="text/html" title="It’s okay to not be okay" /><published>2021-11-02T00:00:00+00:00</published><updated>2021-11-02T00:00:00+00:00</updated><id>https://avikdas.com/2021/11/02/its-okay-to-not-be-okay</id><content type="html" xml:base="https://avikdas.com/2021/11/02/its-okay-to-not-be-okay.html"><![CDATA[<p>What I’m about to say applies to everybody, but with <a href="https://movember.com">Movember</a> and my own experience as a man in mind, I hope my words will at least be useful to the men who read this. 
<strong>It’s okay to not be okay.</strong></p>

<p>The last year and a half has been damaging to all of us. Losing a job makes us feel like less of a provider, and the pandemic has been profoundly isolating. Worse still, for some of us, this isolation has not even been anomalous. Exaggerated maybe, but not anomalous. And I’m sure for many of us, there has been a time in our lives when we latched onto ideologies that ultimately hurt us as we looked for connection. The polarization and echo chambers enabled by social media have made this self-destructive behavior easier than ever.</p>

<p>I’m here to say from personal experience, it’s okay to not be okay. <strong>Your need for meaningful connection and personal autonomy is valid.</strong> Feeling overwhelmed is valid. Feeling like you’re drowning in the expectations of others is valid. Feeling like things are not going your way is valid. Feeling like no one gives you the attention you need is valid. We don’t have to tough it out. Asking for help and being vulnerable won’t make you less of a man.</p>

<p>That’s it. No solutions right now, no advice on what to do next. Just acknowledgement that your feelings are valid.</p>

<hr />

<p>I raise awareness for Movember because mental health is really important to me, and men experience mental health struggles in a specific way that’s deep rooted in our culture of tough masculinity. If you want to help, <a href="https://conversations.movember.com/en-us/">reach out to a friend</a>, participate in a Movember event to keep the conversations going, or <a href="https://mobro.co/akdas">donate to Movember</a>. Let’s save some lives!</p>]]></content><author><name>Avik Das</name></author><summary type="html"><![CDATA[What I’m about to say applies to everybody, but with Movember and my own experience as a man in mind, I hope my words will at least be useful to the men who read this. It’s okay to not be okay.]]></summary></entry></feed>