177698943
submission
BitterEpic writes:
This isn’t a conspiracy theory—it’s been covered by outlets like NPR, Newsweek, and USA Today: Democratic organizations actually spent money to promote Trump-aligned Republicans in GOP primaries. Why? The idea was to elevate “unelectable” opponents who’d be easier to beat in general elections. Sounds clever—unless the plan backfires. And with Trump winning in 2016 and still holding serious political sway, it’s worth asking: Did Democrats help create the very threat they claim to fear?
If Democrats truly believe Trump is an existential threat to democracy, why play with fire? Promoting candidates they think are too extreme to win assumes voters will always choose “correctly.” That’s not only arrogant; it’s dangerous. If he wins again, that looks less like strategy and more like sabotage. Let’s also be honest: a lot of people who voted for Trump probably didn’t even like him. They just saw a bad system and chose the person they thought might shake it up. If Democrats helped make him the only viable alternative, that’s not just a Republican problem. It’s an American one.
I'm a big fan of ranked-choice voting. It gives people more options and weakens the two-party death grip that lets tactics like this work in the first place. If voters weren’t so locked into “lesser of two evils” thinking, parties wouldn’t be able to rig the system this way.
Serious question for Slashdotters: If you donated to the DNC or supported these tactics, do you think it was worth it? Do you think boosting Trump-aligned candidates was a responsible strategy? There are a lot of political comments here and I'm genuinely curious.
177665427
submission
BitterEpic writes:
I’m going to be honest: TypeScript/JavaScript isn’t so bad.
I’ve been doing software engineering for a while now. Most of my current work revolves around using TypeScript and AWS to help companies struggling with inefficient software stacks and a lack of engineering expertise. I’ve been consistently surprised by how often legacy systems — built with bulky, slow, and overly complex stacks like Java or C# combined with some version of SQL — bring engineering velocity to a grinding halt.
One client told me it took them three months to implement MFA. Java. Another couldn't get acceptable draw performance from a canvas app. C#. My own TypeScript solutions routinely outperform these setups — with less money and less development time.
The Strengths of TypeScript
TypeScript has clear strengths. Development is fast, and modern runtimes mitigate many of the downsides you’d expect from a scripting language. TypeScript and JavaScript have also evolved to meet other languages in the middle. The syntax and structure borrow from C, Go, and Python. Honestly, you could argue that programming languages are converging.
TypeScript is also a relatively small and straightforward language. Like Python, it favors simplicity, which makes it approachable and easy to teach. It can also be used across your whole stack: infrastructure, web frontend, and backend.
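To make that concrete, here is a minimal sketch of what one language across the stack can look like, assuming AWS CDK v2 (aws-cdk-lib); the stack name, asset path, and handler file are placeholders I made up for illustration:

    import { Stack, StackProps, Duration } from 'aws-cdk-lib';
    import * as lambda from 'aws-cdk-lib/aws-lambda';
    import { Construct } from 'constructs';

    // Infrastructure, written in TypeScript.
    export class ApiStack extends Stack {
      constructor(scope: Construct, id: string, props?: StackProps) {
        super(scope, id, props);

        new lambda.Function(this, 'HelloFn', {
          runtime: lambda.Runtime.NODEJS_20_X,
          handler: 'index.handler',
          code: lambda.Code.fromAsset('dist/hello'), // hypothetical bundled output
          memorySize: 128,
          timeout: Duration.seconds(10),
        });
      }
    }

    // The backend handler (dist/hello/index.ts) is TypeScript too:
    // export const handler = async () => ({ statusCode: 200, body: 'ok' });

The web frontend, of course, is TypeScript by default, so the whole team reads and writes one language.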
AWS + TypeScript: A Natural Fit
If you’re working in AWS, TypeScript (or JavaScript) is often your best option. Unless you’re using C, C++, or Rust, you'll consistently see faster cold start times with TypeScript compared to most other languages. JIT engines and garbage collection in languages like Java and C# add significant startup time — sometimes up to three seconds.
Consider this: for 100,000 one-second AWS Lambda invocations, using concurrency on a single Lambda function will cost around $13.14 with Java/C#, versus just $1.69 using a lighter runtime like Node.js. Multiply that across services and scaling events, and costs add up quickly — not to mention the UX hit when users start wondering whether your app is down.
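For what it's worth, the Node.js number matches a back-of-the-envelope check, assuming 1,024MB of memory, one second of billed duration per invocation, and current x86 on-demand pricing of roughly $0.0000166667 per GB-second plus $0.20 per million requests (the Java/C# figure also depends on how much billed startup time and extra memory your workload needs, so I won't try to reconstruct it here):

    // Rough cost check for 100,000 one-second invocations at 1GB.
    const invocations = 100_000;
    const gbSeconds = invocations * 1 /* second each */ * 1 /* GB */;
    const computeCost = gbSeconds * 0.0000166667;           // about $1.67
    const requestCost = (invocations / 1_000_000) * 0.20;   // about $0.02
    console.log(`$${(computeCost + requestCost).toFixed(2)}`); // about $1.69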
While the V8 engine is great, it’s not optimized for small Lambda environments (e.g., 128MB RAM). Fortunately, newer engines like QuickJS offer alternatives. At just 367KB, compared to over 80MB for Node.js, QuickJS skips the JIT and focuses on fast cold starts. In my experience, performance is still more than acceptable for many use cases. In fact, using a lightweight runtime like LLRT, I’ve seen API call latency drop from 1.4s to 700ms, fast enough that you don’t need a spinner. After warm-up, those calls often return in 200–300ms depending on your memory allocation, all while using only about 30MB of memory.
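If you want to try an LLRT-style setup yourself, here is a rough sketch of how I would wire it up with CDK. It assumes a recent aws-cdk-lib, that you have bundled your handler to index.mjs, and that the llrt release binary sits next to it in the asset directory as the bootstrap executable; the directory and function names are mine, not anything official:

    import { Stack } from 'aws-cdk-lib';
    import * as lambda from 'aws-cdk-lib/aws-lambda';
    import { Construct } from 'constructs';

    export class LlrtStack extends Stack {
      constructor(scope: Construct, id: string) {
        super(scope, id);

        new lambda.Function(this, 'LlrtFn', {
          // Custom runtime: Lambda executes the bundled `bootstrap` binary (llrt).
          runtime: lambda.Runtime.PROVIDED_AL2023,
          handler: 'index.handler',
          code: lambda.Code.fromAsset('llrt-dist'), // bootstrap + bundled index.mjs
          memorySize: 128, // the small-footprint case discussed above
        });
      }
    }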
But What About WebAssembly?
You might be thinking, “Why not just use WebAssembly for everything?” In theory, that sounds great. WebAssembly supports many languages — but in practice, converting languages like C#, Go, or Python is still problematic due to the lack of garbage collection in the current spec. These languages have to bundle their own GC runtimes, bloating your app and undermining the performance gains.
Languages like C, C++, and Rust, however, fit perfectly within WebAssembly’s model. If I hit a case where I need extreme performance in a Lambda, I’ll write just that one in Rust to get the most out of it.
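As a sketch of what that split can look like from the TypeScript side: suppose the Rust hot path has been compiled to hot_path.wasm (for example with the wasm32-unknown-unknown target) and exports a function named sum; the file and export names are invented for the example:

    import { readFile } from 'node:fs/promises';

    // Load and instantiate the Rust-compiled module, then call into it.
    const bytes = await readFile('hot_path.wasm');
    const { instance } = await WebAssembly.instantiate(bytes);
    const sum = instance.exports.sum as unknown as (a: number, b: number) => number;

    console.log(sum(2, 3)); // 5

Everything around the hot path stays in TypeScript; only the performance-critical core moves to Rust.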
Why I Recommend TypeScript
The biggest reason I push TypeScript, especially for non-tech-focused companies, is that you can train almost anyone to use it. Give someone a good pattern to follow and they can just run with it. Copy. Paste. Go. I genuinely don’t understand the appeal of enterprise languages that require reading something like Design Patterns in C# just to understand what’s going on.
Despite 10 years of C experience, I wouldn’t go back. I do want to learn more Rust and add it to my toolkit, but for most of the problems I solve, TypeScript is the most sensible choice.
Let’s be honest: TypeScript and JavaScript have had the messy job of democratizing programming. They’ve gone through a slow evolution to support basic features. And sure, you’ll encounter a million junior developer blogs acting like gospel. But even with that noise, the language itself isn’t bad — and it’s only getting better.
What About You?
What do you use for writing serverless code? Does your choice of language limit your performance or scalability? Do you have any tricks I should know about?
177560709
submission
BitterEpic writes:
Traditional JavaScript runtimes like Node.js rely on a JIT compiler and garbage collection, which can introduce unpredictable pauses and slow down performance, especially during cold starts in serverless environments like AWS Lambda. LLRT (AWS’s experimental Low Latency Runtime), built in Rust around the QuickJS engine, skips the JIT entirely and keeps its footprint tiny, leading to smoother, more predictable performance.
LLRT also has a runtime under 2MB, a huge reduction compared to the 100MB+ typically required by Node.js. This lightweight design means lower memory usage, better scalability, and reduced operational costs. With no JIT to warm up and so little to load, LLRT can initialize in milliseconds, which is perfect for latency-sensitive applications where every millisecond counts.
For JavaScript developers, LLRT offers the best of both worlds: rapid development with JavaScript’s flexibility, combined with Rust’s performance. This means faster, more scalable applications without the usual memory bloat and cold start issues.
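To give a feel for it, here is the kind of handler I mean. It is ordinary TypeScript and should run the same on Node.js or LLRT, assuming the DynamoDB client is among the AWS SDK v3 clients the runtime ships with; the table name and key shape are invented for the example:

    import { DynamoDBClient, GetItemCommand } from '@aws-sdk/client-dynamodb';

    // Created once per sandbox, so warm invocations reuse the client.
    const client = new DynamoDBClient({});

    export const handler = async (event: { id: string }) => {
      const result = await client.send(
        new GetItemCommand({
          TableName: 'Orders',          // hypothetical table
          Key: { pk: { S: event.id } }, // hypothetical key schema
        })
      );
      return { statusCode: 200, body: JSON.stringify(result.Item ?? null) };
    };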
Still in beta, LLRT promises to be a major step forward for serverless JavaScript applications. By combining Rust’s performance with JavaScript’s flexibility, it opens new possibilities for building high-performance, low-latency applications. If it continues to evolve, LLRT could become a core offering in AWS Lambda, potentially changing how we approach serverless JavaScript development.
Would you consider JavaScript as the core of your future workflow? Or would you rather go lower-level with something like QuickJS?