Thread

Zero-JS Hypermedia Browser

Relays: 5
Replies: 38
Generated: 03:02:22
Do. It. Yourself. Anon, really, it’s not that hard. We have the tools. nostr:note1cfqcewnxjwwv0zcduptdrdajng7dat37ap765x9yt9cmdjq726xq40dvcs
2025-10-20 16:12:42 from 1 relay(s), 2 replies

Replies (38)

Proxmox has its issues tho. I'm sure they all do. I used to run a Windows cluster; it was great but a lot to manage. Proxmox has horrible network stability issues, and STONITH can be a real killer with corosync. Its HA storage is a little lacking tho. You either go hyperconverged (which I guess is getting popular) or standalone Ceph, which is not cost effective.
2025-10-20 16:40:55 from 1 relay(s), 3 replies
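For anyone chasing the corosync quorum behavior mentioned above, a sketch of the votequorum knobs in /etc/corosync/corosync.conf that change how a cluster reacts to lost nodes. The option names are real corosync options; the values here are illustrative, not recommendations:

```
# /etc/corosync/corosync.conf (fragment) -- illustrative values
quorum {
    provider: corosync_votequorum
    # two_node: 1              # special case for 2-node clusters
    wait_for_all: 1            # don't declare quorum until all nodes have been seen once
    last_man_standing: 1       # recalculate expected votes as nodes drop out...
    last_man_standing_window: 20000  # ...after this many ms
}
```

On Proxmox, `pvecm status` shows the live vote count, and an external QDevice can supply a tie-breaking vote so losing half the nodes doesn't stall the whole cluster.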
I really liked a 2-node HA SAN setup with iSCSI and virtual IPs. Windows had SMB Multichannel as a backhaul for HA storage quorum. SMB has fantastic performance on Windows using link aggregation and load balancing, so you can easily use a mix of 10G fiber and 1G. Linux cannot do this at all; LACP blows in comparison. iSCSI multipath is better, but still. And I'm not sure if anyone has priced out 10Gb switches lately...
2025-10-20 16:46:23 from 1 relay(s), 1 reply
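Since iSCSI multipath came up as the closest Linux equivalent: a minimal sketch using standard open-iscsi and multipath-tools. The commands and config options are real; the portal addresses and IQN are placeholders:

```
# discover and log in over both fabric paths (placeholder portals/IQN)
iscsiadm -m discovery -t sendtargets -p 10.0.1.10:3260
iscsiadm -m discovery -t sendtargets -p 10.0.2.10:3260
iscsiadm -m node -T iqn.2000-01.com.example:san1 --login

# /etc/multipath.conf (fragment)
defaults {
    user_friendly_names  yes
    path_grouping_policy multibus        # stripe I/O across all live paths
    path_selector        "round-robin 0"
    failback             immediate
}

# verify both paths show up as active
multipath -ll
```

Unlike SMB Multichannel, dm-multipath balances at the SCSI-path level, so it wants symmetric paths; it won't gracefully blend a 10G and a 1G leg the way Windows does.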
I partially agree, but coming from Windows networking, Linux networking sucks ass out of the box, and even the basics require RTFM. Unless you learn to become a wizard (and I haven't), for a balance of performance, HA, and hardware pricing, Windows just works out of the box. That, and there is no reason for a hard crash when a single node loses quorum. I'm currently learning about STONITH, but that seems unreasonable when a link goes down due to a packet drop or a temporary STP lockout. Which means, yeah, you SHOULD have physically redundant connections for a cluster network... Okay, so another set of NICs, another switch, and another 40 Ethernet patch cables... I already have two 48-port switches almost at capacity.
2025-10-20 16:50:00 from 1 relay(s), 2 replies
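For reference on the LACP comparison, a minimal Linux 802.3ad bond in netplan syntax. The syntax is standard netplan; the NIC names and address are placeholders:

```yaml
# /etc/netplan/01-bond.yaml (sketch; placeholder NIC names and IP)
network:
  version: 2
  ethernets:
    enp1s0f0: {}
    enp1s0f1: {}
  bonds:
    bond0:
      interfaces: [enp1s0f0, enp1s0f1]
      parameters:
        mode: 802.3ad                   # LACP
        lacp-rate: fast
        transmit-hash-policy: layer3+4  # balance per flow, not per frame
      addresses: [10.0.0.5/24]
```

This is part of why the comparison above is fair: LACP only aggregates same-speed links and hashes each flow onto one member, whereas SMB Multichannel opens multiple connections and can spread a single transfer across mismatched NICs.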
We all have limited time and priorities. It will come; it's still important to explain this nuance to people. If I had known then what I know now, I probably would never have switched to Proxmox. People see what they want to see and leave out the massive blocking complexities that require specialization in the domain. We can't all specialize in everything.
2025-10-20 16:54:52 from 1 relay(s), 1 reply
I think it's one of those things you have to get right the first time, otherwise you're stuck in the hell of "I can't touch the network because the entire cluster will hard crash and take 25 minutes to come back online, and God I hope no disks were corrupted." My UPSs are getting kind of old and sometimes brown-outs don't trip fast enough. I had an issue where a quick power loss tripped up my main switch; looking at the logs, 3/5 nodes lost quorum and the whole cluster hard crashed and hardware reset. I lost 3 VMs in the process and had to restore them from backup. Took almost 2 hours to recover at 3 am XD
2025-10-20 17:02:51 from 1 relay(s), 2 replies
That anecdote is a power issue though. This is where a holistic approach matters highly; one bad power system fucking up the higher layers is so damn frequent, and it sucks when that's what you're stuck using. No power, no revenue.
2025-10-20 17:06:48 from 1 relay(s), 2 replies
The network outage was intermittent, and 2 nodes still had quorum (yes, I'm aware that's too low a vote count); taking the ENTIRE cluster down over 5 seconds of network loss is absolutely nuts to me. The machines in the picture I shared had consumer UPSs and crappier network cards, configurations, and switches in comparison, and they still did better in terms of stability.
2025-10-20 17:09:16 from 1 relay(s), 1 reply
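On the "5 seconds of network loss kills the cluster" point: corosync's totem token timeout controls how long a node can stay silent before peers declare it dead, and the stock timeout (roughly 1 second plus a small per-node coefficient) is shorter than a brief switch blip. Raising it trades slower detection of real failures for tolerance of short outages. A hedged /etc/corosync/corosync.conf fragment with illustrative values:

```
# /etc/corosync/corosync.conf (fragment) -- illustrative, not a recommendation
totem {
    version: 2
    cluster_name: mycluster   # placeholder name
    token: 10000              # ms of silence before a node is declared lost
    token_retransmits_before_loss_const: 10
}
```

On Proxmox specifically, HA self-fencing is triggered by quorum loss, so a longer token timeout means a momentary STP or power blip no longer hard-resets nodes; the cost is that genuinely dead nodes take that much longer to fail over.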