Replies (106)
Wish I knew what that stuff is, cuz I like the sense of power this gear gives me. Would prob be poor if i did, so maybe best I remain ignorant
If it didn't take so much effort to run _well_ I'd recommend everyone load up 1 kilowatt server racks to heat their homes in the winter.
But yes it's also a huge financial game lol. It took me about 12-15 years of learning hardware cycles and how to get used enterprise gear cheap and make the junk I have work.
Chip...turner... that's a pun, isn't it?
If you ever need a job at yolobase (my exchange, in case you're unaware), hit us up. The infra here is... well, a good analogy would be a scene from a submarine from an old war movie. Steady lads 🫡
Wait wtf? Was it always "Tuner"? Been reading it as "turner" all this time
XD Yeah it's chip tuner. Since 2022 :)
It's a callback to my old career as an automotive calibration firmware engineer aka tuner.
xD
TBC on auto firmware. I have questions! Another time persnaps
haha. it's quite funny, i'm not even a car freak but in the late 90s i remember the arrival of electronic fuel injection and services to "tune" the "chips"
I only learned a couple years ago that some people "tune" their cars (teens mostly, I assume, or that sort) by simply doing some sort of chip upgrade. No idea how any of that works, but I'm sure it's a deep rabbithole
how do you handle the non-static IP?
I'm aware of services to work around that issue, but I'm curious to learn how your system handles it
yeah, back then, chip mods were the thing. basically, replace the program, with a tweaked version that does whatever thing. i remember also around the same time there was a big move into tuned exhausts as well, with their kooky rumble... on a 1.6L engine lol
You can't self host! Your 200 users need lower latency in Malaysia! Besides it is probably against your ISP's Acceptable Use Policy!
I am actually super pleased that in switching to fiber my new AUP isn't as stupid about hosting.
Well I have static IPs and redundant networks, but that's because I run stuff commercially. But there are many ways around it; I ran with a dynamic IP for a long time. My ISP rarely released it, maybe once every few years, so it wasn't a showstopper when I woke up to a brief outage because my IP changed.
- dynamic dns as a service exists,
- most dns services have APIs and there are many integrations with free/oss software you can run to update your dns (see the sketch after this list)
- you can purchase a private or shared server from a cloud provider (many of them for added redundancy) and use ssh or vpn tunnels
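(For the DNS-API route, a minimal sketch using Cloudflare's v4 API from cron; the zone/record IDs, token, and hostname are placeholders, and any DNS provider with an API works the same way:)

```
#!/bin/sh
# cron this every few minutes; ZONE_ID, RECORD_ID, CF_TOKEN are placeholders
IP=$(curl -s https://api.ipify.org)
curl -s -X PUT \
  "https://api.cloudflare.com/client/v4/zones/$ZONE_ID/dns_records/$RECORD_ID" \
  -H "Authorization: Bearer $CF_TOKEN" \
  -H "Content-Type: application/json" \
  --data "{\"type\":\"A\",\"name\":\"home.example.com\",\"content\":\"$IP\",\"ttl\":120}"
```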
In my case, the service is commercial, but the location is residential, so I use cloud servers in a few datacenters to spread out the likelihood of failure. If you do an nslookup on www.vaughnnugent.com you'll see 2 IP addresses, one US east coast, one US west. In my experience, when a cloud server provider goes down, it rarely takes down their entire network. It's at higher-level services (like Cloudflare), where the complexity is greater, that outages like this often happen.
If you had any more specific questions I'd be happy to answer them.
I need to actually try hosting something. I am a bit worried that my firewall-fu will fall short of requirements against the IP pirates of the Internet.
Yeah thankfully the only issues I have are with email routing. I have to tunnel it because you know it's illegal in the US to host your own mail server without permission.
I've been with this ISP for a very long time and it's not a secret that I run these things. Back in the day I just had port 80 wide open and routed directly to the house, yolo.
Effectively illegal or actually illegal?
Redundant cloud servers routing traffic to home :) You can do a significant amount of filtering and load balancing at layer 4. Shared servers, given enough bandwidth, on modern VPS have more than enough left over resources to do deeper inspection and filtering.
I had to fill out a government form with InMotion and Linode to allow port 25 traffic. I think it's actually a law that ISPs, residential and commercial, block port 25 traffic (source and dest) without permission. That form gets submitted to the FCC I believe, then I had to wait for approval. I also had to list the company (presumably your full name if using personally) and every domain I expected to send from, and why. My ISP has no remediation for it, but I haven't asked in a while. I don't want to send from my home IP anyway.
Can you tell your cloud provider that you refuse to pay for anything more than a 500Mbps connection to your VPN server? That way they accidentally protect you from DOSing your home connection.
You don't happen to live close enough to pop over to talk shop at some point? Probably not; I am stuck in MN where the only other nostrite sells jerky.
Yup, that's it. In the 90s it was called chip tuning colloquially. Usually switching to EEPROMs over the stock PROMs and continually swapping ROM chips out after modifying lookup data. The only difference now is that FPGAs and SRAM got fast and cheap enough that we could socket active devices directly onto the bus instead. Same old 90s vehicles, new tech look.
AMA!
That is ridiculous! It is just email. I suppose if you were hosting for other people data security would be important since people's email is basically their online ID. (So stupid that)
I mean you can, yeah, but more realistically you'd use nginx, HAProxy, or Envoy etc. and handle layer 4 traffic that way. So unless you had a really bad config I can't see that going wrong, at least by default. I just do layer 4 tunneling, no vpn. Although I'm starting to fiddle with ssh tunneling for things that really need some sort of client-initiated tunneling.
Stay hu--
Read this exchange
Stack sats
Someday I... may
Well, it's because until very recently, by default, mail servers were designed to relay email. So you could act as a client, connect to my mail server with a gmail.com address, and my server was supposed to go: oh I don't have that address internally, but let me find that for you, ope, I found it, let's get that mail sent for ya there bud. That's how spam and other malicious traffic propagated. Actually I learned much of this from a great ValueStack podcast with
@Jameson Lopp.
It was naively designed to handle high-level routing and discovery, then was used maliciously and triggered an act of Congress.
Many mail servers have been deprecated over the past couple years but I've found new ones. Postfix and Dovecot are still around, but good luck getting those up and running.
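(For anyone following along: the open-relay behavior described above is exactly what modern Postfix refuses by default. A sketch of the relevant main.cf lines, roughly the stock defaults in recent versions:)

```
# main.cf — relay only for local networks or authenticated users
smtpd_relay_restrictions = permit_mynetworks,
                           permit_sasl_authenticated,
                           defer_unauth_destination
mynetworks = 127.0.0.0/8 [::1]/128
```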
yeah, i imagine that with time the engineering of optimal timings and response programs they are really efficient now. just the fact that diesel engines can tolerate intermittent operation now, that was not the case until the mid 00s. the number of times i sat at bus interchanges for 5 minutes with the engine idling, doesn't happen anymore, even the buses have better engine controls and all the other new systems they added.
you really should try wireguard.
In this use case, there is no reason to add another man in the middle. There is nothing to gain, only more complexity.
that proves you don't realise wireguard is a p2p protocol. it's functionally similar to an SSH tunnel but over UDP
technically, it's a point-to-point protocol, but it interfaces with the routing stack, so you can use it to connect multiple systems together. like with 3 machines, you can set up 3 paths and each machine's IP stack can route between them if all three have one connection to each other.
No. You misunderstand the use case.
ok, i just heard something about you using ssh tunneling. literally wireguard is like easy ssh tunneling and instead of AES, uses the noise protocol with the chacha20/poly1305 AEAD cipher. you can tunnel naked non-TLS connections through it securely, and it's a lower latency negotiation than standard TLS with AES.
Yes, that's a great use case for it. And yes, it's a great tool I use often.
But ssh tunnels have their use case. Specifically, wireguard, in its implemented form, requires a server and client, routes all allowed traffic by default, and must be configured. SSH is pretty much already and always configured, and is initiated client side with pre-installed client software on the big 3 operating systems. SSH lets the client initiate a remote or local tunnel and control the traffic type. Things you can kind of hack onto wireguard with iptables are just built into SSH.
SSH also supports chacha20-poly1305 now btw. I require it for my SFTP customers.
Both have a use, but when people call me up and want a quick way to handle the tunnels with higher level of security by default, ssh is a great choice. Can be done in a few seconds on the terminal with systemd.
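(A minimal sketch of that; hostnames and ports are placeholders:)

```
# remote (reverse) tunnel: with GatewayPorts enabled on the VPS,
# clients hitting vps.example.com:9000 reach localhost:8080 at home
ssh -N -R 9000:localhost:8080 user@vps.example.com

# local tunnel: reach a remote-side service from here
ssh -N -L 5432:db.internal:5432 user@vps.example.com
```

Wrap either one in a systemd unit with Restart=always and it survives reboots and dropped connections.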
Well yes. But it has been the common wisdom to not run an open relay for at least a few decades. Is it still the default if you don't configure Postfix correctly?
It is easy to play without getting too hardware rich.
I run tons of tiny services, just for my own home, from a tiny NUC. I did stick a 1TB nvme drive and 32 GB of ram in it. But using LXD containers I have things like my own git server, databases, management tools, game servers for the kids, etc.
The hardest part was trunking the one NIC so I can run containers on different VLANs. I still don't really understand what I did, or what cloud-init is and why it is used by my distro.
But all you really need is a router, a managed switch, and some old computer that will run headless (the last is optional, you can get an HDMI dongle to fool an old laptop into thinking it has peripherals)
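(Re: the VLAN trunking above, the iproute2 version of it is a sketch like this; the NIC name and VLAN IDs are examples, and LXD can then attach containers to those interfaces, e.g. with a macvlan nictype:)

```
# tag two VLANs on the single trunked NIC
ip link add link eth0 name eth0.10 type vlan id 10
ip link add link eth0 name eth0.20 type vlan id 20
ip link set eth0.10 up
ip link set eth0.20 up

# attach a container's nic to one of them (LXD)
lxc config device add mycontainer eth0 nic nictype=macvlan parent=eth0.10
```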
i think the coming crop of AI capable LPDDR5+ memory and AI accelerators in the CPUs and other coprocessors will do a pretty good job of warming up small offices for sure :)
Do you use port 22 or give them a custom port? I suppose it doesn't matter if configured correctly.
I have an inappropriate love of wireguard, if only because I used to use OpenVPN.
i run a wireguard server on mine and only listen on the wireguard port. eliminates all the logs of idiot scripts trying to hax the server.
also, i call bullshit on ssh tunneling being easy, it most certainly is not, and routing capabilities are even more arcane, i'm sure.
why invent a second language to express rules that could be done with iptables?
wireguard is great, the only deficiency i see is that complex network topologies are unwieldy.
if you have 3 machines, each one runs a wg node, and you can actually have them all coexist on the same subnet, and with a little iptables they can route with each other easily.
but it is cumbersome. you have to do fairly complete graph topologies for this
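(a sketch of one node's config in a 3-node mesh; keys, endpoints, and addresses are all placeholders:)

```
# /etc/wireguard/wg0.conf on machine A; bring up with: wg-quick up wg0
[Interface]
Address = 10.0.0.1/24
PrivateKey = <A's private key>
ListenPort = 51820

# machine B
[Peer]
PublicKey = <B's public key>
Endpoint = b.example.net:51820
AllowedIPs = 10.0.0.2/32

# machine C
[Peer]
PublicKey = <C's public key>
Endpoint = c.example.net:51820
AllowedIPs = 10.0.0.3/32
```

B and C each carry the mirror-image config, which is the "fairly complete graph" complaint: n nodes means n(n-1)/2 peer pairs to maintain.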
Thanks for writing, it's mostly over my head sadly. Maybe if you come out west you can explain over the course of an 8 hour lunch and I'll understand.
Seriously though, I plan to do some kind of home server thing soonish. A friend of mine streams his music from his server and I want that badly
I have resistance to understanding these things, I think because there's so much one has to just take on faith, and there's no hope of understanding the details of a lot of the pieces. Or maybe I'm slow

https://github.com/angristan/wireguard-install — WireGuard VPN installer for Linux servers
for linux
the wireguard clients for windows and mac are full GUIs so that's pretty easy to set up. but you need a VPS running a wireguard server that the machines are clients of, and then, with a device connected to the server, you can tunnel to the gaming machine and watch its stream if it's listening on 0.0.0.0 or specifically on the wireguard client IP address
i use it also with my mobile, idk about ios but there is an android GUI client for wireguard too, so you can stream to a mobile as well.
i cannot recommend enough playing with wireguard for this kind of thing. self hosting relays is another thing you can do. wss://test.orly.dev redirects to my pc and you can reach it when i'm running it for tests, solves the TLS/SSL certificate bullshit. which reminds me, yet another piece of useful software i wrote that github is denying public access to
Oh, if you just want to stream your own music: get a NAS that has a Plex server app, then install Plexamp on your phone.
I do have wireguard installed on my router to get at my nas without doing a port forward to allow the NAS manufacturer access.
But I vaguely think wireguard is becoming more common even on consumer routers.
It is over my head too. But I try anyway. How far west are we talking? If it is as far west as the Dakotas I can't make it by lunchtime.
All the way west
@node this jive with your setup?
<table flip> say WireGuard again! Or NAS or NIC. I dare you
Thanks YODL. I tried Plex before but I prefer Jellyfin
WARRRRRRGARRRRRD!!!! ✊
well, it's in the kernel since 5.6, i think, so basically anything you can configure that has that kernel can use it. for everything else, there is a client/server that talks the protocol.
Sorry, I am confused and not very good at this social stuff. Which one do you want me to say again, Wireguard, NAS, or NIC? Once you make up your mind I'll come back and say it for you. But since you seem unsure maybe it is some other term you were thinking of possibly
802.11
AP
Frame Tagging (802.1Q VLAN)
DNS
DMZ
IPv6
If you like any of those let me know. Real guys like ChipTuner have like two more layers deep of lingo so you'll have to ask him for the good stuff.
I want to give that one a try. Plex likes to break things at inconvenient times for corporate reasons.
Yes, and mleku is just as bad. You guys think you're sooo smart with your fancy lingo.
If it helps any I keep forgetting what an abelian group is. I also never grok'd eigenvalues or eigenvectors. So go ahead and name drop a few terms from set theory.
But really lingo has nothing to do with intelligence, but it is the map of the territory for any discussion. The first and hardest step in getting into any new field is just learning the vocabulary. After that you can specialize as a BSer like me who talks a big game or an expert like ChipTuner who actually knows what the words mean and gets things done.
knowing words is not the same as understanding
GPT and friends know many things but they don't understand shit
i like to explain things to people and i don't consider it to be cool to speak over people's heads. analogies can usually build a bridge to the ideas
This is why one of my favorite uses for AI is to teach me the lingo. They make crap up but they do use the right words. It gives me a starting point for my own research.
LLMs aren't really intelligent. they can parse a bunch of things and make inferences out of a body of text, based on the memory encoded into the parameters, but their actual brains come from the network of parameters, not the knowledge inside it.
hallucinations are when the information exceeds the brain capacity, not the data set. it's kinda cool because actually, to some extent, the hallucinations can sometimes be creative, sometimes even almost correct. you can watch them recognise this when you point out that some facts they give you are not mainstream theory but do validate against the model, and the LLM affirms it, which is nice. but they are dumb, and when i say dumb, i mean under human 80 IQ. the big words are just read off a dictionary, more or less.
For the entire life of hMailServer it was (a mail server for Windows), until right before it went abandoned I believe. But no, I don't think it is. It doesn't matter now though; it's been over a decade where nearly no one runs email servers anymore, and a huge majority of individuals who would be affected by spam have been hidden behind big tech for twice that long.
But we both know that laws are only additive.
I don't use a tunnel normally and no I don't use port 22 :)
You can simply configure an nginx stream to forward ingress on port 443 directly to home. This is how enterprise L4 load balancing is handled. You'll want to enable the PROXY protocol (the HAProxy one) so you can use client IPs to do L7 rate limiting.
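(A sketch of that nginx stream config; the upstream address is a placeholder for whatever tunnel/endpoint points back home:)

```
# nginx.conf (top level, not inside http {})
stream {
    upstream home {
        server home.example.com:443;
    }
    server {
        listen 443;
        proxy_pass home;
        proxy_protocol on;  # send the PROXY header so home sees real client IPs
    }
}
```

The listener at home then needs something like `listen 443 ssl proxy_protocol;` (or the haproxy equivalent) to parse that header.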
I already have a pair of P100s putting in work, bumping that up to almost 1.5 kW XD
We ought to fix that. My proposal is that the entire body of law applicable to any jurisdiction (local/state/federal/international) must be compressible, along with any dictionaries and the compression algorithm itself, to say 100kB, and be comprehensible to a highschool graduate.
So if you want to add a law you will have to find one to delete.
Now you are beyond me which would delight
@YODL but I'll have to bookmark this and come back to it once I have something I want to host.
AMA on this :) I've just recently bumped up that capacity and robustness. The current side project has been using linux, pacemaker, and podman to make a high availability application cluster. Similar to the result of kubernetes, without the mess of the kubernetes control plane.
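(A sketch of what that looks like on the pacemaker side, assuming pcs, three nodes, and a podman-backed systemd unit called myapp; all names and addresses are placeholders:)

```
# authenticate and form the cluster
pcs host auth node1 node2 node3
pcs cluster setup mycluster node1 node2 node3
pcs cluster start --all

# floating virtual IP that follows the healthy node
pcs resource create vip ocf:heartbeat:IPaddr2 \
    ip=192.168.1.100 cidr_netmask=24 op monitor interval=30s

# the containerized app as a systemd-managed resource, kept with the VIP
pcs resource create myapp systemd:myapp op monitor interval=30s
pcs constraint colocation add myapp with vip INFINITY
```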
Oh I've got many, many fancy terms in that realm. Been collecting some new ones lately too.
Here's a great joke that'll no doubt help you remember "abelian group":
What's purple and commutes?
An abelian grape!
With you on vocab, though in a lot of ways (especially math, or maybe physics) knowing definitions is a big part of knowing anything; they are so precisely refined over time that they reflect the underlying structure well, or at least distill it. I dunno if that makes sense, but it's why I find all tech stuff infuriating to understand. Bunch of random terms, different flavors by manufacturer/developer, NOBODY understands any piece fully anymore, etc. Gives me vertigo
Think it was you that compared them to a child mind but with HUGE RAM. I like that view
yeah, probably as dumb as a 4 year old, but it can read a million times faster and regurgitate sensible sounding expressions using them... even if some of them (about 37%) are invalid.
i barely even know what an arithmetic group is. i had a cute cryptographer girl explain it to me once in a presentation when i was working for a shitcoin building a zero knowledge smart contract engine. but i don't remember a thing about it, beyond that it's a set of numbers related to a specific set of operations, that lets you do cool stuff like ECDH
I kinda understood this. Have heard of docker clusters, and I know kubernetes is like docker.
@cloud fodder introduced me to the systemd container system, what was it called? i forget. main disadvantage compared to docker is you have to build the container in one go, no aufs/btrfs/overlayfs. i think his spamblaster still uses it. he directed me to this thing. what was it, nsomething, i forget
Do you have this ... cryptographer's number? Asking for a friend
Groups are one of the more basic yet powerful math constructions/areas of study. Just 3 simple axioms, and quite a bit follows. One cool thing that can help conceptualize them: it can be shown that all finite groups (say of order N) can be embedded in the set of permutations of N objects (the functions that mix a set of N objects 1-1; there are N! of them, and the group operation is composition of functions). So you can look at groups as sets of permutations that are closed under this operation (that's where that "invertible" axiom is needed: permutations, being 1-1, are invertible functions). Neither here nor there, but I think knowing this helps understand the motivation or interest behind them.
Abelian groups are those that are commutative: order doesn't matter. All finite abelian groups are classified nicely as "direct products" of cyclic groups (eg Z3 x Z5 is an abelian group of order 15). Cyclic groups are just an even narrower set of groups that have just one factor in these direct product decompositions.
In the elliptic curve setting, we construct a convoluted operation (the weird adding of points on a curve) and then show it's a group. If it turns out this group has, say, prime order, it must be cyclic (easy enough to prove this), so we get that nice presentation using generators.
This is rough and a couple things I could smooth over but don't wanna
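(For reference, the textbook statement of those axioms, plus the commutativity that makes a group abelian; nothing thread-specific here:)

```latex
% A group is a set G with an operation * such that:
\begin{align*}
&\text{associativity:} && (a * b) * c = a * (b * c) && \forall\, a, b, c \in G \\
&\text{identity:}      && \exists\, e \in G:\ e * a = a * e = a && \forall\, a \in G \\
&\text{inverses:}      && \forall\, a \in G\ \exists\, a^{-1} \in G:\ a * a^{-1} = a^{-1} * a = e \\
&\text{abelian (extra):} && a * b = b * a && \forall\, a, b \in G
\end{align*}
```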
Yeah there is Docker Swarm, but most of the world is on rancher/k8s kubernetes for devops. That is mostly focused on moving the deployment lifecycle of an application to the development team. It would take a LOT more work to bundle what I've done to make it useful outside of my environment. Right now it's an experiment, because my tiny pea brain can't handle the kubernetes control plane in _my_ environment, which is not "the cloud."
Yeah I believe it's just called systemd containers, at least that's the RHEL package for it. I actually have to leverage the systemd container tools to manage the environments. And yeah, that's one of many disadvantages. For devops I could see myself using it more, but right now onedev and scripts do that job very well.
Relatable af xD
Don't get me started on rancher/k8s!
(I'll stop posting my not-even-that-funny-and-of-little-value response now)
Well I'm glad I'm not the only one then. It sure feels isolating when the whole world is like -> kubs. That's it. Just install it bro. It's the best option bro. There's nothing better bro. You MUST learn kubes bro. The whole industry relies on kubes bro.
yeah, kubernetes is ok but i can throw together better and more flexible deployments by hand with not very much work, especially assisted by claude.
beyond that, i try to build my apps to not need containers, and when i get the time, release binaries too so it's literally just copy the binary, set up the service, enable --now and enjoy.
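(that pattern, sketched out; the binary and unit names are made up:)

```
# /etc/systemd/system/myapp.service
[Unit]
Description=myapp
After=network-online.target

[Service]
ExecStart=/usr/local/bin/myapp
User=myapp
Restart=on-failure

[Install]
WantedBy=multi-user.target
```

then `systemctl enable --now myapp` and enjoy, as advertised.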
I try to do the same. All vnlib related packages are bundled so they can be run identically from the terminal or inside a container. Gotta have containers though. Deploying user-facing services running "directly" on a production cluster is not an option imo. Containers remove my reliance on the developer to set up my environment. I can override file paths with mounts, control networking, map uids, override dns, scale up and down, migrate easily between machines, and update or roll back by changing the image hash in a single file. I can also force limitations (cgroup stuff) like cpu scheduling, memory limits and whatnot. Nasty things that applications (especially foss apps) usually do that can ruin a shared server.
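(A sketch of those overrides on one podman run line; the image, paths, and limits are all made-up examples:)

```
# -v: override file paths with mounts; --dns: dns override;
# --userns=keep-id: uid mapping; --memory/--cpus: cgroup limits
podman run -d --name myapp \
  -v /srv/myapp/config:/etc/myapp:Z \
  --dns 10.0.0.53 \
  --userns=keep-id \
  --memory 512m --cpus 1.5 \
  -p 8443:8443 \
  registry.example.com/myapp@sha256:...
```

Pinning by digest is what makes the "change the hash in a single file" rollback work.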
yis, systemd-nspawn, still the goat. one main advantage i think is running long running containers, that have a full systemd running inside them (multiprocess). docker is horrible at multiprocess. and with that type of thing, you can have agents running inside that handle incremental updates and things are just easier this way vs trying to go 100% single process. having a single process is not a requirement for microservices and it eats into the uptime having to be a single proc wrangler.
haproxy is a good example, it has really, really good hot-reloading. and docker won't let it do that.
it's just cgroups, and volumes are bind mounts for your persistent data. fast. very.
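(a sketch of the nspawn workflow; paths and names are examples:)

```
# build a rootfs (or import a docker image, see below in the thread)
debootstrap stable /var/lib/machines/myct

# boot it with its own init (full systemd inside), bind-mounting persistent data
systemd-nspawn -D /var/lib/machines/myct --boot \
    --bind=/srv/myct/data:/var/lib/myapp

# or manage it like any other machine
machinectl start myct
```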
yeah, shit programmers are the reason you need containers. i have had to reinstall linux many times in my life because of those shit programmers, when i accidentally clobber some system dependency deep in the stack while trying to build some piece of shit experimental app that does some neat thing.
my app is designed like it is a normal user app. if it is found to break other things in the system, it has to be fixed. a lot of server devs don't have that user-first policy like i do. everything must be simple enough that anyone with basic unix shell skills can use it.
i literally don't even try to run a lot of server software on my system without a container around them because i'm so allergic to blowing up my /lib directory once again.
Very very true, but how are you handling automation, packaging, or IaC? Also, when I say containers: containers != docker. Fuck docker specifically.
I've been using fuse.bindfs and podman to handle my complex filesystem needs. I don't need anything multi-process at the moment. However the git server will, for now.
It's more like opinionated programmers imo. And devs can't build things for every environment, my "cloud" is very different from say EC2.
I think the issue is that, 99% of the time, we're not deploying infra from a terminal. I'm doing it from my IDE, then pushing to a git server which kicks off jobs to deploy apps. The terminal does me no good here.
Yes exactly. I run podman on my local machine for basically everything. Just open up a tty and leave it up.
opinionated programmers are shit programmers.
i know you think i'm opinionated for favoring Go, but i mainly favor it because it was the first language that didn't get in my way. with javascript and C/C++/rust there's always 50 more dependencies, and build times in the multiple aeons.
anyhow, you have deployed my relay in the past. i'm sure it isn't the worst or most painful task you ever did.
lol, and did i mention python? it might be the most irritating language to deploy apps from. when is the neverending migration to 3 going to complete? i think a whole childhood of years has passed since that shit started.
yeah, i highly recommend nspawn as a tool for containers without the docker, kubernetes or other cloud grief.
also, sorry, not sorry, but i'm not deploying a hello world to amazon. ever.
it's such an anti-bitcoin thing to do, if you ask me.
especially someone like
@npub1qdjn...fqm7 who could build a cloud from scratch, i mean, did, if i am understanding what he says correctly.
I am keeping it verrrrrry simple. I build images on a cadence. But those images have a life of their own once they are running, because they have systemd scripts and golang agent processes that manage things in an efficient way (like updating in place, renewing certificates, and managing the whole proxy control plane). Persistent data is saved to a place outside the image, so for 'manual intervention' it's always possible to terminate the running image and launch the new one while keeping the data and configs. Software like haproxy automatically manages app servers, for example, so when they update they are removed from the pool, then added back after they have updated.
even kube can't really do multiprocess.. it's kind of interesting. but yeah, for distributed things, i think performance matters. if not, then sure, blow half your network stack on re-proxying in the most inefficient way possible :)
in my ideal world, i don't compile, run, or use any package management outside pacman to do anything in user land that has anything to do with anything. even my own apps. it's 1) dirty and 2) a security risk
but ya in reality, some things i do, so i prob need moar machines 😂 but python, hell no, i just bind mount /dev/ttyUSB0 and flash that meshtastic from an nspawn container's python. 😎
That was one of my issues with kubes as well: the service discovery, which I hadn't planned out. Okay, so now every app is behind a traefik proxy... so I'm just adding _another_ proxy. When instead a virtual IP can handle active-passive, and the L7 balancers can handle active-active.
> more specific questions
yeah, at this point, I think you answered my question pretty thoroughly and it's aligned with the answer I anticipated... you've got a commercial use, and you're piggybacking. makes total sense.
I don't have a commercial use yet, and so I haven't sprung for the static IP... and the various dynamic solutions are a bit more complex than I've taken any effort to address.
🫡
Actually I have a question
@cloud fodder. I'm wondering if I could configure and deploy my pacemaker control plane as an nspawn container? Right now I'm still doing it script based, which isn't ideal for maintaining cluster members.
Most of the system services need to be running under root though, not sure if that's possible. I'd like to be able to migrate to an immutable or read-only OS image at some point.
nspawn containers are like docker without the builder or overlay filesystem. you would use your scripts to set them up.
Yeah I did some more reading and it's not going to be the solution for me. It might be for my upcoming storage cluster but not for the application clusters.
yeah. nspawn is for people who like to get their hands dirty with programming systems tooling, not for systems administrators who also happen to be C programmers.
idk what the differential in pay grade is but i'm firmly in the DIY group, i have an allergy to abstraction and complexity that standard devops/sysops work involves, i try to make using that stuff optional with my servers.
Exactly! However I think I'm trying to solve a provisioning problem the wrong way. Something about being able to just turn things on and off without mutation of the system is what I was interested in playing with.
I'm playing with ignition and Fedora CoreOS as we speak.
lol
you can also export docker images and import them to systemd-nspawn, so you can still use docker build tools or prebuilt images. i started off that way and sometimes do this on various things that someone has already published
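(a sketch of that flow; the image name is just an example:)

```
# flatten any docker image into a rootfs tarball
docker create --name tmp debian:stable
docker export tmp > rootfs.tar
docker rm tmp

# import it as an nspawn machine and jump in
machinectl import-tar rootfs.tar myct
systemd-nspawn -M myct   # shell inside; add --boot if the image ships systemd
```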