nym 1 year ago
I Quit! The Tsunami of Burnout Few See

By now, we all know the name of the game is narrative control: we no longer face problems directly and attempt to solve them at their source, we play-act "solutions" that leave the actual problems unrecognized, undiagnosed and unaddressed, on the idea that if we cover them up long enough they'll magically go away.

The core narrative control is straightforward: 1) everything's great, and 2) if it's not great, it's going to be great. Whatever's broken is going to get fixed, AI is wunnerful, and so on. All of these narratives are what I call Happy Stories in the Village of Happy People, a make-believe staging of plucky entrepreneurs minting fortunes, new leadership, technology making our lives better in every way, nonstop binge-worthy entertainment, and look at me, I'm in a selfie-worthy mise en scène that looks natural but was carefully staged to make me look like a winner in the winner-take-most game we're all playing, whether we're aware of it or not.

Meanwhile, off-stage in the real world, people are walking off their jobs: I quit! They're not giving notice, they're just quitting: not coming back from lunch, or resigning without notice.

We collect statistics in the Village of Happy People, but not about real life. We collect stats on GDP "growth," the number of people with jobs, corporate profits, and so on. We don't bother collecting data on why people quit, or why people burn out, or what conditions eventually break them.

Burnout isn't well-studied or understood. It didn't even have a name when I first burned out in the 1980s. It's an amorphous topic because it covers such a wide range of human conditions and experiences. It's a topic that's implicitly avoided in the Village of Happy People, where the narrative control Happy Story is: it's your problem, not the system's problem, and here's a bunch of psycho-babble "weird tricks" to keep yourself glued together as the unrelenting pressure erodes your resilience until there's none left.

Prisoners of war learn many valuable lessons about the human condition. One is that everyone has a breaking point, everyone cracks. There are no god-like humans; everyone breaks at some point. This process isn't within our control; we can't will ourselves not to crack. We can try, but it's beyond our control. This process isn't predictable. The Strong Leader everyone reckons is unbreakable might crack first, and the milquetoast ordinary person might last the longest.

Those who haven't burned out / been broken have no way to understand the experience. They want to help, and suggest listening to soothing music, or taking a vacation to "recharge." They can't understand that to the person in the final stages of burnout, music is a distraction, and they have no more energy for a vacation than they have for work. Even planning a vacation is beyond their grasp, much less grinding through travel. They're too drained to enjoy anything that's proposed as "rejuvenating."

We're trained to tell ourselves we can do it, that sustained super-human effort is within everyone's reach, "just do it." This is the core cheerleader narrative of the Village of Happy People: we can all overcome any obstacle if we just try harder. That the end-game of trying harder is collapse is taboo. But we're game until we too collapse. We're mystified by our insomnia, our sudden outbursts, our lapses of focus, and as the circle tightens we jettison whatever we no longer have the energy to sustain, which ironically is everything that sustained us.
We reserve whatever dregs of energy we have for work, and since work isn't sustaining us in any way other than financial, the circle tightens until there's no energy left for anything. So we quit, not because we want to per se, but because continuing is no longer an option, and quitting is a last-ditch effort at self-preservation.

Thanks to the Happy Stories endlessly repeated in the Village of Happy People, we can't believe what's happening to us. We think, this can't be happening to me, I'm resourceful, a problem-solver, a go-getter, I have will power, so why am I banging my head against a wall in frustration? Why can't I find the energy to have friends over?

All these experiences are viewed through the lens of the mental health industry, which is blind to the systemic nature of stress and pressure, and so the "fixes" are medications to tamp down what's diagnosed not as burnout but as depression or anxiety, in other words, the symptoms, not the cause. And so we wonder what's happening to us, as the experience is novel and nobody else seems to be experiencing it. Nobody seems willing to tell the truth, that it's all play-acting: that employers "really care about our employees, you're family," when the reality is we're all interchangeable cogs in the machine that focuses solely on keeping us glued together to do the work.

Why people crack and quit is largely unexplored territory. In my everyday life, three people I don't know quit suddenly. I know about it because their leaving left their workplaces in turmoil, as there are no ready replacements. One person was working two jobs to afford to live in an expensive locale, and the long commute and long hours of her main job became too much. So the other tech is burning out trying to cover her customer base. In another case, rude / unpleasant customers might have been the last straw, along with a host of other issues. In the moment, the final trigger could be any number of things, but the real issue is the total weight of stress generated by multiple, reinforcing sources of internal and external pressure.

There's a widespread belief that people will take whatever jobs are available when the economy slumps into recession. This presumes people are still able to work. Consider this chart of disability. Few seem interested in exploring this dramatic increase. If anyone mentions it, it's attributed to the pandemic. But is that the sole causal factor?

![](https://m.stacker.news/72538)

We're experiencing stagflation, and it may well just be getting started. If history is any guide, costs can continue to rise for quite some time as the purchasing power of wages erodes and asset bubbles deflate. As noted in a previous post, depending on financial fentanyl to keep everything glued together is risky, because we can't tell if the dose is fatal until it's too late.

![](https://m.stacker.news/72539)

A significant percentage of the data presented in my posts tells a story that is taboo in the Village of Happy People: everyday life is much harder now, and getting harder. Life was much easier, less overwhelming, more stable and more prosperous in decades past. Wages went farther--a lot farther. I have documented this in dozens of posts.

My Social Security wage records go back 54 years, to 1970, the summer in high school I picked pineapple for Dole. Being a data hound, I laboriously entered the inflation rate as calculated by the Bureau of Labor Statistics (which many see as grossly understating actual inflation) to state each year's earnings in current dollars.
Of my top eight annual earnings, two were from the 1970s, two were from the 1980s, three from the 1990s and only one in the 21st century. Please note that the nominal value of my labor has increased with time / inflation; what we're measuring here is the purchasing power / value of my wages over time. That the purchasing power of my wages in the 1970s as an apprentice carpenter exceeded almost all the rest of my decades of labor should ring alarm bells.

But this too is taboo in the Village of Happy People: of course life is better now because "progress is unstoppable." But is it "progress" if our wages have lost value for 45 years? If precarity on multiple levels is now the norm? If the burdens of shadow work are pushing us over the tipping point?

This is systemic, it's not unique to me. Everyone working in the 70s earned more when measured in purchasing power rather than nominal dollars, and the prosperity of the 80s and 90s was widespread. In the 21st century, not so much: it's a winner-take-most scramble that most of us lose, while the winners get to pull the levers of the narrative control machinery to gush how everything's great, and it's going to get better.

I've burned out twice, once in my early 30s and again in my mid-60s. Overwork, insane commutes (2,400 miles each way), caregiving for an elderly parent, the 7-days-a-week pressures of running a complex business which leaks into one's home life despite every effort to silo it, and so on. I wrote a book about my experiences, Burnout, Reckoning and Renewal, in the hopes that it might help others simply to know that others share these experiences.

What's taboo is to say that the source is the system we inhabit, not our personal inability to manifest god-like powers. The system works fine for the winners who twirl the dials on the narrative control machinery, and they're appalled when they suffer some mild inconvenience when the peasantry doing all the work for them break down and quit.

A tsunami of burnout and quitting, both quiet and loud, is on the horizon, but it's taboo to recognize it or mention it. That the system is broken because it breaks us is the taboo that is frantically enforced at all levels of narrative control. That's the problem with deploying play-acting as "solutions": play-acting doesn't actually fix the problems at the source, it simply lets the problems run to failure.

The dishes at the banquet of consequences are being served cold because the staff quit: as Johnny Paycheck put it, Take This Job And Shove It. The peasants don't control the narrative control machinery, and so we ask: cui bono, to whose benefit is the machinery working? The New Nobility, perhaps?

originally posted at
nym 1 year ago
Listing all mounts in all mount namespaces

A little while ago we added a new API for retrieving information about mounts.

## `listmount(2)` and `statmount(2)`

To make it easier to interact with mounts, the `listmount(2)` and `statmount(2)` system calls were introduced in Linux `v6.9`. They both allow interacting with mounts through the new 64-bit mount ID (unsigned) that is assigned to each mount on the system. The new mount ID isn't recycled and is unique for the lifetime of the system, whereas the old mount ID was recycled frequently and maxed out at `INT_MAX`. To differentiate the new and old mount ID, the new mount ID starts at `INT_MAX + 1`.

Both `statmount(2)` and `listmount(2)` take a `struct mnt_id_req` as their first argument:

```c
/*
 * Structure for passing mount ID and miscellaneous parameters to statmount(2)
 * and listmount(2).
 *
 * For statmount(2) @param represents the request mask.
 * For listmount(2) @param represents the last listed mount ID (or zero).
 */
struct mnt_id_req {
	__u32 size;
	__u32 spare;
	__u64 mnt_id;
	__u64 param;
	__u64 mnt_ns_id;
};
```

The struct is versioned by size and thus extensible.

### `statmount(2)`

`statmount()` allows detailed information about a mount to be retrieved. The mount to retrieve information about can be specified in `mnt_id_req->mnt_id`. The information to be retrieved must be specified in `mnt_id_req->param`.

```c
struct statmount {
	__u32 size;		/* Total size, including strings */
	__u32 mnt_opts;		/* [str] Options (comma separated, escaped) */
	__u64 mask;		/* What results were written */
	__u32 sb_dev_major;	/* Device ID */
	__u32 sb_dev_minor;
	__u64 sb_magic;		/* ..._SUPER_MAGIC */
	__u32 sb_flags;		/* SB_{RDONLY,SYNCHRONOUS,DIRSYNC,LAZYTIME} */
	__u32 fs_type;		/* [str] Filesystem type */
	__u64 mnt_id;		/* Unique ID of mount */
	__u64 mnt_parent_id;	/* Unique ID of parent (for root == mnt_id) */
	__u32 mnt_id_old;	/* Reused IDs used in proc/.../mountinfo */
	__u32 mnt_parent_id_old;
	__u64 mnt_attr;		/* MOUNT_ATTR_... */
	__u64 mnt_propagation;	/* MS_{SHARED,SLAVE,PRIVATE,UNBINDABLE} */
	__u64 mnt_peer_group;	/* ID of shared peer group */
	__u64 mnt_master;	/* Mount receives propagation from this ID */
	__u64 propagate_from;	/* Propagation from in current namespace */
	__u32 mnt_root;		/* [str] Root of mount relative to root of fs */
	__u32 mnt_point;	/* [str] Mountpoint relative to current root */
	__u64 mnt_ns_id;	/* ID of the mount namespace */
	__u32 fs_subtype;	/* [str] Subtype of fs_type (if any) */
	__u32 sb_source;	/* [str] Source string of the mount */
	__u32 opt_num;		/* Number of fs options */
	__u32 opt_array;	/* [str] Array of nul terminated fs options */
	__u32 opt_sec_num;	/* Number of security options */
	__u32 opt_sec_array;	/* [str] Array of nul terminated security options */
	__u64 __spare2[46];
	char str[];		/* Variable size part containing strings */
};
```

### `listmount(2)`

`listmount(2)` allows the (recursive) retrieval of the list of child mounts of the provided mount. The mount whose children are to be listed is specified in `mnt_id_req->mnt_id`. For convenience, it can be set to `LSMT_ROOT` to start listing mounts from the rootfs mount.

A nice feature of `listmount(2)` is its ability to iterate through all mounts in a mount namespace. For example, say a buffer for 100 mount IDs is passed to `listmount(2)`, but the mount namespace contains more than 100 mounts. `listmount(2)` will retrieve 100 mounts. Afterwards, `mnt_id_req->param` can be set to the last mount ID returned in the previous request.
`listmount(2)` will then return the next mount after the last one listed. `listmount(2)` also allows iterating through subtrees. This is as simple as setting `mnt_id_req->mnt_id` to the mount whose children are to be retrieved. By default, `listmount(2)` returns earlier mounts before later mounts. This can be changed by passing `LISTMOUNT_REVERSE` to `listmount(2)`, which causes it to list later mounts before earlier mounts.

## Listing mounts in other mount namespaces

Both `listmount(2)` and `statmount(2)` by default operate on mounts in the caller's mount namespace, but both support operating on another mount namespace. Either the unique 64-bit mount namespace ID can be specified in `mnt_id_req->mnt_ns_id` or a mount namespace file descriptor can be set in `mnt_id_req->spare`. In order to list mounts in another mount namespace, the caller must have `CAP_SYS_ADMIN` in the owning user namespace of the mount namespace.

### Listing mount namespaces

The mount namespace ID can be retrieved via the new `NS_MNT_GET_INFO` nsfs `ioctl(2)`. It takes a `struct mnt_ns_info` and fills it in:

```c
struct mnt_ns_info {
	__u32 size;
	__u32 nr_mounts;
	__u64 mnt_ns_id;
};
```

The mount namespace ID will be returned in `mnt_ns_info->mnt_ns_id`. Additionally, it will also return the number of mounts in the mount namespace in `mnt_ns_info->nr_mounts`. This can be used to size the buffer for `listmount(2)`.

This is accompanied by two other nsfs ioctls. `ioctl(fd_mntns, NS_MNT_GET_NEXT)` returns the mount namespace after `@fd_mntns`, and `ioctl(fd_mntns, NS_MNT_GET_PREV)` returns the mount namespace before `@fd_mntns`. These two ioctls allow iterating through all mount namespaces in a forward or backward manner. Both also optionally take a `struct mnt_ns_info` argument to retrieve information about the mount namespace. All three ioctls are available in Linux `v6.12`.

## Conclusion

Taken together, these pieces allow a suitably privileged process to iterate through all mounts in all mount namespaces. Here is a (dirty) sample program to illustrate how this can be done. Note that the program below assumes that the caller is in the initial mount and user namespace. When listing mount namespaces, a mount namespace will only be listed if the caller has `CAP_SYS_ADMIN` in the owning user namespace; otherwise, it will be skipped.

```c
// SPDX-License-Identifier: GPL-2.0-or-later
// Copyright (c) 2024 Christian Brauner <brauner@kernel.org>
#define _GNU_SOURCE
#include <errno.h>
#include <limits.h>
#include <linux/types.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/ioctl.h>
#include <sys/syscall.h>
#include <unistd.h>

#define die_errno(format, ...)                                        \
	do {                                                          \
		fprintf(stderr, "%m | %s: %d: %s: " format "\n",      \
			__FILE__, __LINE__, __func__, ##__VA_ARGS__); \
		exit(EXIT_FAILURE);                                   \
	} while (0)

/* Get the id for a mount namespace */
#define NS_GET_MNTNS_ID _IO(0xb7, 0x5)
/* Get next mount namespace. */

struct mnt_ns_info {
	__u32 size;
	__u32 nr_mounts;
	__u64 mnt_ns_id;
};

#define MNT_NS_INFO_SIZE_VER0 16 /* size of first published struct */

/* Get information about namespace. */
#define NS_MNT_GET_INFO _IOR(0xb7, 10, struct mnt_ns_info)
/* Get next namespace. */
#define NS_MNT_GET_NEXT _IOR(0xb7, 11, struct mnt_ns_info)
/* Get previous namespace. */
#define NS_MNT_GET_PREV _IOR(0xb7, 12, struct mnt_ns_info)

#define PIDFD_GET_MNT_NAMESPACE _IO(0xFF, 3)

#ifndef __NR_listmount
#define __NR_listmount 458
#endif

#ifndef __NR_statmount
#define __NR_statmount 457
#endif

#ifndef __NR_pidfd_open
#define __NR_pidfd_open 434
#endif

/* pidfd_open(2) wrapper. */
static int sys_pidfd_open(pid_t pid, unsigned int flags)
{
	return syscall(__NR_pidfd_open, pid, flags);
}

/* @mask bits for statmount(2) */
#define STATMOUNT_SB_BASIC		0x00000001U /* Want/got sb_... */
#define STATMOUNT_MNT_BASIC		0x00000002U /* Want/got mnt_... */
#define STATMOUNT_PROPAGATE_FROM	0x00000004U /* Want/got propagate_from */
#define STATMOUNT_MNT_ROOT		0x00000008U /* Want/got mnt_root */
#define STATMOUNT_MNT_POINT		0x00000010U /* Want/got mnt_point */
#define STATMOUNT_FS_TYPE		0x00000020U /* Want/got fs_type */
#define STATMOUNT_MNT_NS_ID		0x00000040U /* Want/got mnt_ns_id */
#define STATMOUNT_MNT_OPTS		0x00000080U /* Want/got mnt_opts */

#define STATX_MNT_ID_UNIQUE		0x00004000U /* Want/got extended stx_mount_id */

struct statmount {
	__u32 size;
	__u32 mnt_opts;
	__u64 mask;
	__u32 sb_dev_major;
	__u32 sb_dev_minor;
	__u64 sb_magic;
	__u32 sb_flags;
	__u32 fs_type;
	__u64 mnt_id;
	__u64 mnt_parent_id;
	__u32 mnt_id_old;
	__u32 mnt_parent_id_old;
	__u64 mnt_attr;
	__u64 mnt_propagation;
	__u64 mnt_peer_group;
	__u64 mnt_master;
	__u64 propagate_from;
	__u32 mnt_root;
	__u32 mnt_point;
	__u64 mnt_ns_id;
	__u64 __spare2[49];
	char str[];
};

struct mnt_id_req {
	__u32 size;
	__u32 spare;
	__u64 mnt_id;
	__u64 param;
	__u64 mnt_ns_id;
};

#define MNT_ID_REQ_SIZE_VER1 32 /* sizeof second published struct */

#define LSMT_ROOT 0xffffffffffffffff /* root mount */

static int __statmount(__u64 mnt_id, __u64 mnt_ns_id, __u64 mask,
		       struct statmount *stmnt, size_t bufsize,
		       unsigned int flags)
{
	struct mnt_id_req req = {
		.size		= MNT_ID_REQ_SIZE_VER1,
		.mnt_id		= mnt_id,
		.param		= mask,
		.mnt_ns_id	= mnt_ns_id,
	};

	return syscall(__NR_statmount, &req, stmnt, bufsize, flags);
}

static struct statmount *sys_statmount(__u64 mnt_id, __u64 mnt_ns_id,
				       __u64 mask, unsigned int flags)
{
	size_t bufsize = 1 << 15;
	struct statmount *stmnt = NULL, *tmp = NULL;
	int ret;

	for (;;) {
		tmp = realloc(stmnt, bufsize);
		if (!tmp)
			goto out;

		stmnt = tmp;
		ret = __statmount(mnt_id, mnt_ns_id, mask, stmnt, bufsize, flags);
		if (!ret)
			return stmnt;

		if (errno != EOVERFLOW)
			goto out;

		bufsize <<= 1;
		if (bufsize >= UINT_MAX / 2)
			goto out;
	}

out:
	free(stmnt);
	return NULL;
}

static ssize_t sys_listmount(__u64 mnt_id, __u64 last_mnt_id, __u64 mnt_ns_id,
			     __u64 list[], size_t num, unsigned int flags)
{
	struct mnt_id_req req = {
		.size		= MNT_ID_REQ_SIZE_VER1,
		.mnt_id		= mnt_id,
		.param		= last_mnt_id,
		.mnt_ns_id	= mnt_ns_id,
	};

	return syscall(__NR_listmount, &req, list, num, flags);
}

int main(int argc, char *argv[])
{
#define LISTMNT_BUFFER 10
	__u64 list[LISTMNT_BUFFER], last_mnt_id = 0;
	int ret, pidfd, fd_mntns;
	struct mnt_ns_info info = {};

	pidfd = sys_pidfd_open(getpid(), 0);
	if (pidfd < 0)
		die_errno("pidfd_open failed");

	fd_mntns = ioctl(pidfd, PIDFD_GET_MNT_NAMESPACE, 0);
	if (fd_mntns < 0)
		die_errno("ioctl(PIDFD_GET_MNT_NAMESPACE) failed");

	ret = ioctl(fd_mntns, NS_MNT_GET_INFO, &info);
	if (ret < 0)
		die_errno("ioctl(NS_GET_MNTNS_ID) failed");

	printf("Listing %u mounts for mount namespace %llu\n",
	       info.nr_mounts, info.mnt_ns_id);
	for (;;) {
		ssize_t nr_mounts;
next:
		nr_mounts = sys_listmount(LSMT_ROOT, last_mnt_id,
					  info.mnt_ns_id, list,
					  LISTMNT_BUFFER, 0);
		if (nr_mounts <= 0) {
			int fd_mntns_next;

			printf("Finished listing %u mounts for mount namespace %llu\n\n",
			       info.nr_mounts, info.mnt_ns_id);

			fd_mntns_next = ioctl(fd_mntns, NS_MNT_GET_NEXT, &info);
			if (fd_mntns_next < 0) {
				if (errno == ENOENT) {
					printf("Finished listing all mount namespaces\n");
					exit(0);
				}
				die_errno("ioctl(NS_MNT_GET_NEXT) failed");
			}
			close(fd_mntns);
			fd_mntns = fd_mntns_next;
			last_mnt_id = 0;
			printf("Listing %u mounts for mount namespace %llu\n",
			       info.nr_mounts, info.mnt_ns_id);
			goto next;
		}

		for (size_t cur = 0; cur < nr_mounts; cur++) {
			struct statmount *stmnt;

			last_mnt_id = list[cur];

			stmnt = sys_statmount(last_mnt_id, info.mnt_ns_id,
					      STATMOUNT_SB_BASIC |
					      STATMOUNT_MNT_BASIC |
					      STATMOUNT_MNT_ROOT |
					      STATMOUNT_MNT_POINT |
					      STATMOUNT_MNT_NS_ID |
					      STATMOUNT_MNT_OPTS |
					      STATMOUNT_FS_TYPE, 0);
			if (!stmnt) {
				printf("Failed to statmount(%llu) in mount namespace(%llu)\n",
				       last_mnt_id, info.mnt_ns_id);
				continue;
			}

			printf("mnt_id:\t\t%llu\nmnt_parent_id:\t%llu\nfs_type:\t%s\nmnt_root:\t%s\nmnt_point:\t%s\nmnt_opts:\t%s\n\n",
			       stmnt->mnt_id,
			       stmnt->mnt_parent_id,
			       stmnt->str + stmnt->fs_type,
			       stmnt->str + stmnt->mnt_root,
			       stmnt->str + stmnt->mnt_point,
			       stmnt->str + stmnt->mnt_opts);
			free(stmnt);
		}
	}

	exit(0);
}
```

originally posted at
nym 1 year ago
Python HTML components

https://about.fastht.ml/components

Why FastHTML embeds HTML generation inside Python code.

The idea of embedding an HTML generator inside a programming language is not new. It is a particularly popular approach in functional languages, and includes libraries like: Elm-html (Elm), hiccl (Common Lisp), hiccup (Clojure), Falco.Markup (F#), Lucid (Haskell), and dream-html (OCaml). But the idea has now gone far beyond the functional programming world—JSX, an embedded HTML generator for React, is one of the most popular approaches for creating web apps today.

However, most Python programmers are probably more familiar with template-based approaches, such as Jinja2 or Mako. Templates were originally created for web development in the 1990s, back when web design required complex browser-specific HTML. By using templates, designers were able to work in a familiar language, and programmers could "fill in the blanks" with the data they needed. Today this is not needed, since we can create simple semantic HTML, and use CSS to style it.

Templates have a number of disadvantages, for instance:

- They require a separate language to write the templates, which is an additional learning curve
- Template languages are generally less concise and powerful than Python
- Refactoring a template into sub-components is harder than refactoring Python code
- Templates generally require separate files
- Templates generally do not support the Python debugger.

By using Python as the HTML-generation language, we can avoid these disadvantages. More importantly, we can create a rich ecosystem of tools and frameworks available as pip-installable Python modules, which can be used to build web applications.

## How

FastHTML's underlying component data structure is called `FT` ("FastTag"). To learn how this works in detail, see the [Explaining FT Components](https://docs.fastht.ml/explains/explaining_xt_components.html) page.

`FT` objects can be created with functions with the capitalized name of each HTML tag, such as `Div`, `P`, and `Img`. The functions generally take positional and keyword arguments (see the short sketch below):

- Positional arguments represent a list of children, which can be strings (in which case they are text nodes), FT child components, or other Python objects (which are stringified).
- Keyword arguments represent a dictionary of attributes, which can be used to set the properties of the HTML tag.
- Keyword arguments starting with `hx_` are used for HTMX attributes.

Some functions, such as `File`, have special syntax for their arguments. For instance, `File` takes a single filename argument, and creates a DOM subtree representing the contents of the file.

Any FastHTML handler can return a tree of `FT` components, or a tuple of FT component trees, which will be rendered as HTML partials and sent to the client for processing by HTMX. If a user goes directly to a URL rather than using HTMX, the server will automatically return a full HTML page with the partials embedded in the body.

Much of the time you'll probably be using pre-written FastHTML components that package up HTML, CSS, and JS. Often, these will in turn hand off much of the work to some general web framework; for instance, the site you're reading now uses Bootstrap (and the `fh-bootstrap` FastHTML wrapper).
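To make these conventions concrete, here is a minimal sketch of building a small FT tree. It assumes the usual `fasthtml.common` exports (`Div`, `P`, `A`, `Img`, and the `to_xml` renderer); the `/contacts` endpoint, the CSS class, and the IDs are invented purely for illustration.

```python
from fasthtml.common import Div, P, A, Img, to_xml

# Positional arguments become children; keyword arguments become attributes.
# `cls` is used for the HTML `class` attribute, and keyword arguments starting
# with `hx_` map to HTMX attributes such as hx-get and hx-target.
card = Div(
    Img(src="/avatar.png", alt="avatar"),
    P("Hello from an FT component"),
    A("Load contacts", hx_get="/contacts", hx_target="#contact-list"),
    Div(id="contact-list"),
    cls="card",
)

# A route handler would normally just return `card` (or a tuple of trees);
# rendering it by hand here simply shows the markup that gets generated.
print(to_xml(card))
```

In a real app, returning `card` from a handler is what lets HTMX swap just that fragment into the page, as described above.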
At first, moving from HTML to FT components can seem odd, but it soon becomes natural – as Audrey Roy Greenfeld, a hugely experienced Python web programmer, author, and educator, told us:

> _"In my head I had resistance and initial scepticism to converting all my HTML to FT. When I realised that working with the tags in Python is like the elegance of working in the frequency domain after Fourier transform vs. working with time series data in the time domain, I finally gave in, let go, started enjoying the FT tags. The first few times I thought the approach of conversion and then copy-pasting was crazy. It was only when I started to understand how to organise the tags into components that it suddenly felt elegant and templates felt crazy."_

One good approach to creating components is to find things you like on the web and convert them to FastHTML. There's a simple trick to doing this:

1. Right-click on the part of a web page that you want to use in your app, and choose 'Inspect.'
2. In the elements window that pops up, right-click on the element you want, choose 'Copy,' and then 'Outer HTML.'
3. Now you've got HTML in your clipboard, you can automatically convert it to FastHTML: go to [h2f.answer.ai](https://h2x.answer.ai/), paste the HTML into the text area at the top, then the FastHTML code will appear at the bottom. Click the Copy icon at the top right of that code and then paste it into your Python app.

BTW, the h2f app mentioned above is written in around a dozen lines of code! You can see the [source code here](https://github.com/AnswerDotAI/fasthtml-example/blob/main/h2f/main.py).

## The Future

We want your help! FastHTML is very new, so the ecosystem at this stage is still small. We hope to see FastHTML Python versions of style libraries like Bootstrap, DaisyUI, and Shoelace, as well as versions of all the most popular JavaScript libraries. If you are a Python developer, we would love your help in creating these libraries! If you do create something for FastHTML users, let us know, so we can link to your work (or if you think it would be a useful part of the FastHTML library itself, or one of our extension libraries, feel free to send us a pull request).

We would also like to see Python modules that hook into FastHTML's and Starlette's extensibility points, such as for authentication, database access, deployment, multi-host support, and so forth. Thanks to Python's flexibility and the power of ASGI, it should be possible for a single FastHTML server to replace a whole stack of separate web servers, proxies, and other components.

originally posted at
nym 1 year ago
Portals and Quake

Ever wanted to know how exactly Quake's precomputed visibility worked? I did, so I wrote vis.py, a reimplementation of their algorithm in Python. This guide has all the information you need to understand vis, the tool used by Quake, Half-Life and Source Engine games.

During the development of Quake, overdraw became a concern. It means the same pixel getting written many times during the rendering of a frame. Only the last color stays visible and the earlier writes go to waste. This is bad if your game is software rendered and already pushing the mid-90s PCs to their limits.

![](https://m.stacker.news/72409)

How to reduce overdraw? Let's begin with a very high-level overview of the solution landscape.

**Portal culling helps with overdraw**

In 3D games, it's a good idea to reduce the number of drawn objects. Frustum culling is one fundamental method for this, in which objects confirmed to be outside the virtual camera's view are skipped during rendering. This can be done for example with object bounding boxes or bounding spheres.

Frustum culling still leaves some performance on the table. Many objects may still be within the field of view of the camera even if they don't contribute any pixels to the final image. This is not a performance catastrophe if everything is rendered from front to back. The GPU's early-z testing will help here. Still, in large worlds it would be faster to never submit these objects for rendering in the first place.

Occlusion culling is a process where you discard objects that you deem to lie behind other objects in the scene. Its purpose is to discard as many occluded objects as possible. It's not strictly needed, since you'll get the correct image thanks to the z-buffer anyway. There are a few ways to do this, such as the hierarchical z-buffer, occlusion queries, portal culling, and potentially visible sets (PVS). In this article I talk about the last two: portals and the PVS.

In portal culling, the world is divided into spaces where the virtual camera can move around and the openings between them. The spaces are called cells, viewcells, zones, clusters or sectors, and the openings portals. This is a useful split especially in architectural models with cleanly separated rooms connected by doorways or windows. It also works for mostly-indoor video game levels :)

![](https://m.stacker.news/72412)

Portal rendering starts from the camera's cell. The game renders everything inside that cell, and then recursively looks into portals leading away from that first cell to find out what else to draw. It renders all objects in every cell and then examines the cell's portals. If a portal doesn't line up with another one on screen, it won't be visited. Each successive portal shrinks the visible screen area smaller and smaller until the whole portal is clipped away.

A straightforward way to test portals for visibility is to intersect their screenspace bounding boxes. Those are shown in white in the picture below. If two bounding boxes overlap, we can see through the respective portals. More accurate tests can be performed with 3D clipping or per-pixel operations.

![](https://m.stacker.news/72413)

The Quake engine uses portals but only during map preparation time. At runtime, the portals are nowhere to be seen. This technique is a variant of Seth Teller's PVS method presented in his 1992 dissertation that only worked with axis-aligned walls.
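The screen-space check mentioned above amounts to a simple rectangle-overlap test. Here is a minimal Python sketch (the coordinates and the box representation are made up for illustration, not taken from vis.py):

```python
def boxes_overlap(a, b):
    """a and b are screen-space bounding boxes given as (min_x, min_y, max_x, max_y)."""
    a_min_x, a_min_y, a_max_x, a_max_y = a
    b_min_x, b_min_y, b_max_x, b_max_y = b
    # Overlapping boxes mean the portals may be mutually visible;
    # disjoint boxes mean the second portal is not visited.
    return not (a_max_x < b_min_x or b_max_x < a_min_x or
                a_max_y < b_min_y or b_max_y < a_min_y)

print(boxes_overlap((10, 10, 50, 40), (30, 20, 80, 60)))  # True: keep recursing
print(boxes_overlap((10, 10, 50, 40), (60, 10, 90, 40)))  # False: clipped away
```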
**Portals of a Quake map disappear**

Often portals are placed by hand by a level designer. Quake's bsp map compilation tool places portals automatically, which is nice, but unfortunately it creates a lot of them!

![](https://m.stacker.news/72414)

You see, in Quake the cells are very small. But no portals are tested at runtime. Instead, each cell gets a precomputed list of other cells that can be seen from it. This is the Potentially Visible Set (PVS) for that cell.

In Quake, a cell is a small convex volume of space, so a single room will usually get split into multiple cells. These cells correspond to leaves of a binary space partitioning (BSP) tree. The BSP tree was used to divide the map into cells and portals. For us, the exact method is irrelevant though. But BSP does make it easy to find the cell the camera is in at runtime.

Since we have now entered Quake territory in our discussion, I'll start calling a cell a leaf. Leaf is the term used in all source code, level editors, error messages, and other resources on Quake. The meaning stays exactly the same though; it's just a convex cell connected to other cells via portals. This is how leaves look in our example level:

![](https://m.stacker.news/72415)

The portals appear in between leaves, as expected:

![](https://m.stacker.news/72416)

Nothing would've stopped them from grouping multiple leaves to form larger cells with fewer portals in between. In fact, this is exactly what they did for Quake 2 with its "clusters" of leaves. With larger clusters of leaves, you do get more overdraw. Also, a cluster made of convex leaves may not be convex itself any more. But even in that case you can still act as if it is, and assume the portals inside can be seen from anywhere in the cluster. It's less accurate but works.

**High-level overview of vis**

The Quake map tool vis takes in portals generated by another tool, bsp, precomputes a leaf-to-leaf visibility matrix, and writes the matrix back to the compiled map file. This article series describes how vis functions.

We know that leaves can see each other only through portals. So we don't even need to know what exactly the leaves look like, only how they are connected together. At its most basic level, vis does two recursive depth-first traversals, followed by a quick resolve pass before writing the visibility results back to a compiled map file. Three steps:

- Base visibility. Estimate a coarse leaf-to-portal visibility.
- Full visibility. Refine the coarse results via portal clipping.
- Resolve. Combine the refined portal-to-leaf results to the final leaf-to-leaf visibility.

For a quick visual overview, I can recommend Matthew Earl's great video on Quake's PVS.

**Portals have a direction**

In a portal system, the cells and portals are structured as a cell-and-portal graph. Quake's map tooling follows this pattern and connects leaves with portals, even though this structure isn't present at runtime. Leaves are connected by portals:

![](https://m.stacker.news/72417)

Since portals are interfaces between convex leaves, the polygons are also convex. In 3D, a portal looks like this:

![](https://m.stacker.news/72418)

Conceptually, each portal is a two-way opening. You can see through it in both directions. However, it's convenient to make the portals directed. This way we can keep track of what's visible in different directions. We give each portal a normal vector, the direction the portal can be seen through.
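To make the cell-and-portal graph concrete, here is a minimal Python sketch of the data involved. The field names (`winding`, `normal`, `leaf_into`, `portal_ids`) are illustrative rather than vis.py's actual layout: each directed portal stores its convex polygon, the normal it can be seen through, and the leaf it leads into, while each leaf stores the indices of the portals leading away from it.

```python
from dataclasses import dataclass, field

@dataclass
class Portal:
    winding: list    # convex polygon vertices in 3D
    normal: tuple    # direction the portal can be seen through
    leaf_into: int   # index of the leaf this directed portal leads into

@dataclass
class Leaf:
    portal_ids: list = field(default_factory=list)  # portals leading away from this leaf

# Two global arrays hold the graph; one undirected opening between leaves 0 and 1
# becomes two directed portals with opposite normals.
portals = [
    Portal(winding=[(0, 0, 0), (0, 1, 0), (0, 1, 1), (0, 0, 1)], normal=(1, 0, 0), leaf_into=1),
    Portal(winding=[(0, 0, 0), (0, 0, 1), (0, 1, 1), (0, 1, 0)], normal=(-1, 0, 0), leaf_into=0),
]
leaves = [Leaf(portal_ids=[0]), Leaf(portal_ids=[1])]

# Walking from a leaf through its portals:
for pi in leaves[0].portal_ids:
    Pi = portals[pi]  # index -> object
    print("portal", pi, "leads into leaf", Pi.leaf_into)
```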
Now a single input portal becomes two directed portals:

![](https://m.stacker.news/72419)

Therefore the graph will now have directed edges instead:

![](https://m.stacker.news/72420)

Note that a leaf stores only indices of portals leading away from that leaf. The graph is stored in two global arrays called `portals` and `leaves` with objects of the respective types. Since the graph is accessed both via indices and direct object references, I came up with the following naming convention:

- `pi` is the index of a portal, `Pi` is the actual object `Pi = portals[pi]`, and
- `li` is the index of a leaf, `Li` is the actual object `Li = leaves[li]`.

Our goal is to compute which nodes can reach each other in this graph while honoring the 3D visibility relations between portals associated with each edge. But what on earth are those "visibility relations"?

originally posted at
nym 1 year ago
Crypto Wallet Makers Metamask, Phantom May Be Liable for Lost User Funds

A Hail Mary filing by an appointee of Joe Biden's outgoing presidential administration seeks to hold crypto wallet developers liable for any fraud or erroneous transactions impacting users—but the move is almost certain to be quashed once Donald Trump takes office later this month.

The Consumer Financial Protection Bureau today announced a new proposed interpretive rule that would grant it the authority to regulate digital asset wallets as financial institutions offering electronic funds transfers. Doing so would allow the Bureau to hold wallet providers like MetaMask and Phantom responsible for fraudulent or erroneous, "unauthorized" transactions. The agency, which was created to protect consumers in the wake of the 2008 financial crisis, says it is legally permitted to make these adjustments, but is opening the proposed rule to two months of public comment as a courtesy.

"When people pay for their family expenses using new forms of digital payments, they must be confident that their transactions are not tainted by harmful surveillance or errors," the Bureau's director, Rohit Chopra, said today in a statement.

The response to the proposed rule by crypto policy leaders was swift and critical.

"Hacked because you… believed that fashion model in Malaysia needed 5,000 bucks to fly to see you? Don't worry your wallet might have to cover it," Bill Hughes, senior counsel at MetaMask creator Consensys, quipped sarcastically in a post to X on Friday. (Disclosure: Consensys is one of 22 investors in Decrypt.)

"This is like holding a hammer manufacturer (who in many cases gives hammers away for free) liable for the misuse of a hammer," Joey Krug, a partner at Peter Thiel's tech-focused venture firm Founders Fund, posted in response.

Many in crypto saw the move, if galling, as unsurprising—given the deep connections between the Consumer Financial Protection Bureau and Elizabeth Warren, perhaps the industry's most hated villain. Warren herself proposed the creation of the Bureau back in 2007, while still a professor at Harvard. Rohit Chopra, the agency's current director, is a longtime Warren ally who was nominated to the position by Joe Biden in 2020.

If crypto leaders are frustrated about Friday's proposed rule, though, they don't seem overly concerned about its potential harm. In 2020, the U.S. Supreme Court ruled that the president can dismiss the Bureau's director without cause. Given the incoming Trump Administration's intensely pro-crypto positioning—and Republicans' long-simmering anger at the mere existence of the Consumer Financial Protection Bureau—it appears likely that Chopra, and his efforts to rein in crypto wallet providers, are living on borrowed time.

originally posted at
nym 1 year ago
Solving NIST Password Complexities: Guidance From a GRC Perspective

Not another password change! Isn't one (1) extra-long password enough? As a former Incident Response, Identity and Access Control, and Education and Awareness guru, I can attest that password security and complexity requirement discussions occur frequently during National Institute of Standards and Technology (NIST) assessments. Access Control is typically a top finding in most organizations, with the newest misconception being, "NIST just told us we don't have to change our passwords as often and we don't need to use MFA or special characters!" This is almost as scary as telling people to put their Post-it notes under the keyboard so they're not in plain sight.

In an article titled "NIST proposes barring some of the most nonsensical password rules," it was stated that NIST's ". . . document is nearly impossible to read all the way through and just as hard to understand fully." This is leading some in the IT field to reconsider or even change password policies, complexities, and access control guidelines without understanding the full NIST methodology. This blog post will provide an understanding of the context and complexities of the NIST password guidance, in addition to helping organizations implement passwords safely and build awareness. No one wants to fall victim to unintended security malpractice when it comes to access control.

**Understanding the NIST Password Guidance in Context**

The buzz around the NIST password guidance is frustrating because everyone seems to zoom right down to the section with the password rules and ignore the rest of the guidelines. The password rules are part of a much larger set of digital identity guidelines, and adopting the password rules without considering their context is counterproductive and potentially dangerous.

The Scope and Applicability section of the new NIST guidelines, formally known as NIST Special Publication 800-63 Digital Identity Guidelines (NIST SP 800-63), states, "These guidelines primarily focus on organizational services that interact with external users, such as residents accessing public benefits or private-sector partners accessing collaboration spaces." In plain English: the guidance in NIST SP 800-63 is not intended for internal users' accounts or sensitive internal systems, and organizations implementing the password rules on their internal systems are misusing the guidance.

For organizations that are planning to use this guidance to secure their external-facing service accounts, NIST SP 800-63 spends 26 pages defining a risk-based process for selecting and tailoring appropriate Identity Assurance Levels (IALs), Authenticator Assurance Levels (AALs), and Federation Assurance Levels (FALs) for systems, with three (3) assurance levels defined in each of those categories (see NIST SP 800-63 Section 3). It goes on to provide guidance on user identification, authentication, and federation controls appropriate for each assurance level in three (3) additional documents—NIST SP 800-63A, B, and C, respectively. The new password guidance is meant to support the AALs (defined in NIST SP 800-63B). Only AAL1, the lowest of the AALs defined in the guidelines, allows passwords to be used alone, and it still states that multi-factor options should be available.
AAL1 is defined as providing "basic confidence that the claimant controls an authenticator bound to the subscriber account." Organizations that adjust their rules for passwords to match the NIST guidelines without performing the risk-based analysis and selecting an appropriate AAL are naively implementing what NIST intended only for the most basic protection. This is an inappropriate use of this guidance document, as many systems will present significantly more risk to the organization than AAL1 was designed to address and would be more appropriately protected by AAL2 or AAL3 controls.

In short, the NIST SP 800-63 password guidance (when used properly with a risk analysis) is intended and appropriate for external user accounts on public-facing services, e.g., customer accounts on a public portal. However, organizations should think twice before applying it to their own internal systems and users, because that was not its intended purpose. It's also worth pointing out that, as of this post, the guidance that is making so many headlines is a draft and is subject to change before finalization.

**Using the NIST Guidance**

The two (2) password questions we are asked most frequently as auditors and GRC consultants at TrustedSec are, "Do we really need Multifactor Authentication (MFA) everywhere?" and "What is the best practice for the implementation of passwords?"

For any organization that logs into a network, starting with a framework is a must for successful governance and cybersecurity foundations. Additionally, organizations must adhere to password and access guidelines based on the legal and regulatory requirements they must follow to keep their businesses running. Some examples are the various NIST security control frameworks (e.g., CSF or SP 800-171), PCI-DSS, HIPAA, NERC-CIP, ISO 27001, SOX, etc. Many of these frameworks include specific requirements for utilizing complex passwords, rotation of passwords or passphrases, enabling MFA, determining access levels, and performing access reviews appropriate for the types of information and/or systems these frameworks are designed to protect. The NIST SP 800-63 guidance does not in any way override or supersede any of these more specific requirements, so organizations should continue meeting existing framework requirements.

You might be asking, "How does this apply to my organization?" Most organizations question whether their situation is applicable, due to the term "online service." In today's society, when users think of an "online service," most would think of shopping portals or online goods. However, as NIST defines it in the Glossary of SP 800-63, an online service is "A service that is accessed remotely via a network, typically the internet." This is important because it clarifies that any organization with an Internet-facing network is using an online service and can adhere to the digital identity guidelines and implement best practices for security, based on its risk profile or digital risk in the absence of other compliance requirements.

What is digital risk? NIST describes Digital Identity Risk Management as a flow that helps organizations perform the risk assessment and determine impact. As seen with the new release of NIST CSF 2.0, Governance and Risk Assessments are the core of a healthy cybersecurity program.
Organizations can begin to perform the risk (impact) assessment, as defined in SP 800-63 Section 3, by defining the scope of the service that they are trying to protect, identifying the risks to the system, and understanding which categories and potential harms apply to the organization. Identifying the baseline controls to use in the risk formula will assist with selecting the appropriate level.

Identity Assurance, or proofing, is about a person proving who they say they are. A useful example: I sign up for a web service and enter an email address and new password—how does the service know I actually control that email? If they're smart, they will send a confirmation email to that address before setting up the account to prove my identity. This gets more complex when a user ID needs to be associated with a specific named individual, e.g., when retrieving medical information from a portal.

Authentication is the process of confirming a user's identity prior to allowing them access to a system or network, such as through a password. Federation is a process that allows access based on identity and authentication across a set of networked systems. Each level will have a severity rating, e.g., Low, Moderate, or High.

Starting with the user groups and entities, thinking of people and assets, and then determining a category or categories will help identify harms or risks. Some examples are listed in SP 800-63. Next is evaluating the impact that improper access would have on an organization. This assists in identifying the impact level. Impacts such as reputation, unauthorized access to information, financial loss/liability, or even loss of life or danger to human safety can be included to help determine the impact level for each user group and organizational entity.

Using the impact level leads to determining the IAL and then the AAL. When determining the AAL, the intent is to mitigate risk from authentication failures such as tokens, passwords, etc., whereas the IAL is aligned with identity proofing and individuals. Certain factors, such as privacy, fraud, or regulatory items, may require a specific AAL. NIST references these in SP 800-63A, SP 800-63B, and SP 800-63C. Once the AAL is established, referencing SP 800-63C can be helpful in the next step: selection of the FAL. This assists in identifying the identity federation level, which uses the Low, Moderate, and High impact criteria.

Once IAL, AAL, and FAL have been established for each user group or entity, an initial AAL can be selected. Of course, this will also need to take into consideration baseline controls, compensating controls, or supplemental controls, if needed. Also keep in mind, as with all NIST processes, that continuous evaluation and improvement are critical to staying secure, making this a recurring process.

![](https://m.stacker.news/72372)

Now that the calculated AAL has been established, the chart in Section 3.5 of SP 800-63B will help with understanding some of the requirements.

![](https://m.stacker.news/72373)

Most systems would be a Level 2 or 3, considering what the organization would be storing, processing, or transmitting. Sensitive data such as personally identifiable information (PII) collected during onboarding, payment card data, and PHI subject to HIPAA for places like hospitals or service providers might help align with each AAL. An instance where Level 1 might be utilized could be a website that does not store payment data but requires a user to log in with a user ID and password.
The risk for such a website is minimal, and therefore a Level 1 for that specific system may be deemed appropriate. But the organization may have corporate network controls set to Level 2, due to HR or doing business with certain service providers. It's perfectly fine to have various levels assigned to different groups, assets, or entities.

A similar process to define the IAL and FAL, as noted in NIST's Digital Identity Guidelines (NIST SP 800-63-3), is depicted below:

**IAL:**

![](https://m.stacker.news/72374)

**FAL:**

![](https://m.stacker.news/72375)

So, after all of this, is one (1) super-long password really enough? It depends on the AAL and what other legal and regulatory requirements an organization is expected to adhere to. The highest level should always be implemented where possible, if the risk is present in the organization. For example, if the organization processes credit card data, PCI-DSS standards would prevail, meaning that passwords pertaining to the CDE must follow all PCI guidance and likely would be considered a Level 3 after performing the risk assessment. The key to the AALs, really, is determining the most sensitive data an organization has and aligning it with that level.

Now, let's put the AAL to use. I always consider Incident Response and Education and Awareness as my lead examples behind why we do security. If an organization becomes compromised, are all passwords and access controls properly aligned with the risk, and do they adhere to all legal and regulatory controls? Lastly, always implement MFA where possible; after all, NIST does strongly suggest it SHOULD be available, even at AAL1! Be proactive about how to report an incident: if your password is suspected to be compromised, change it immediately—even if NIST has a different idea—and communicate it to all users. All users and organizations should understand the risk to their specific organization and be compliant. Security is everyone's responsibility—don't let your organization's password hygiene be the cause of the next big breach.

originally posted at
nym 1 year ago
The Homa network protocol

# The Origins and Development of Homa Transport Protocol

The origins of the TCP and UDP network protocols can be traced back a full 50 years. Even though networks and their use have changed radically since those protocols were designed, they can still be found behind most networking applications. Unsurprisingly, these protocols are not optimal for all situations, so there is ongoing interest in the development of alternatives. One such is the Homa transport protocol, developed by John Ousterhout (of Tcl/Tk and Raft fame, among other accomplishments), which is aimed at data-center applications. Ousterhout is currently trying to get a minimal Homa implementation into the kernel.

Most networking applications are still based on TCP, which was designed for efficient and reliable transport of streams of data across a distributed Internet. Data-center applications, instead, are often dominated by large numbers of small messages between many locally connected hosts. The requirements of TCP, including the establishment of connections and ordering of data, add a lot of overhead to that kind of application. The design of Homa is intended to remove that overhead while taking advantage of what current data-center networking hardware can do, with a focus on minimizing the latency between a request and its response.

## A Quick Homa Overview

At its core, Homa is designed for remote procedure call (RPC) applications; every interaction on a Homa network comes down to a request and associated reply. A client will send a request message to a server that includes a unique request ID; the server will send a reply back that quotes that ID. The only state that exists on the server is held between the receipt of the request and the receipt of the response by the client. Much of the key to the performance of this protocol can be found in how these messages are handled.

There is no connection setup; instead, the client starts transmitting the request, with no introductory handshake, to the server. There is a limit on how many bytes of this "unscheduled" request data can be sent in this manner, which is determined by the round-trip time of the network; it should be just high enough to keep the request-transmission pipeline full until an initial response can be received from the server side. The figure of about 10,000 bytes appears in some of the Homa papers.

The initial request packet includes the length of the full request. If the request does not fit into the size allowed for the unscheduled data, the client will wait for a "grant" response before sending any more. That grant should, if the server is responding quickly, arrive just as the initial request data has finished transmitting, allowing the client to continue sending without a pause. Grants include a maximum amount of data that can be sent and thus function like the TCP receive window.

This machinery is intended to get a request to the server as quickly as possible, but without the need for much, if any, buffering in the network path between the two machines. Priority queues are used to manage this traffic, with unscheduled packets normally having the highest priority. Lower priorities are used for granted traffic; the requests with the least amount of data remaining to be received are given the highest priority. Once the server has received the full request and processed it, a response is sent back to the client.
Once again, the initial bytes are sent as unscheduled packets, with grants required for the rest if the response is large enough.

In the earlier descriptions of the protocol, the server would forget everything it knew about the request immediately after sending the response. That created the possibility that requests could be resent (if the response never arrives) and executed multiple times. More recent publications include an explicit acknowledgment message indicating that a response has been received, with the sender retaining the necessary state to retransmit a reply until that acknowledgment is received.

The details of the protocol are, of course, rather more complex than described here. There are, for example, mechanisms for clamping down on the amount of unscheduled data sent if a server is finding itself overloaded. The receiving side of a message can request retransmission if an expected packet does not arrive; unlike TCP and many other protocols, Homa puts the responsibility for detecting lost packets onto the receiving side. There is also a fair amount of thought that has gone into letting systems overcommit their resources by issuing more grants than they can immediately handle; the purpose here is to keep the pipelines full even if some senders do not transmit as quickly as expected. See this [paper](#) for a more complete (and surely more correct) description of the Homa protocol, this [page](#), which reflects some more recent changes, and this [2022 article](#) for more details.

## Homa on Linux

The Unix socket interface was designed around streams and is not a perfect fit for Homa, but the implementation sticks with it to the extent it can. A `socket()` call is used to create a socket for communication with any number of other systems; the `IPPROTO_HOMA` protocol type is used. Homa can run over either IPv4 or IPv6. For server systems, a `bind()` call can be used to set up a well-known port to receive requests; clients need not bind to a port.

Messages are sent and received, as one might expect, with `sendmsg()` and `recvmsg()`, but there are some Homa-specific aspects that developers must be aware of. When sending a message, an application must include a pointer to this structure in the `msg_control` field of the `msghdr` structure passed to `sendmsg()`:

```c
struct homa_sendmsg_args {
	uint64_t id;			/* zero for a request; the request's ID for a reply */
	uint64_t completion_cookie;	/* passed back with the reply (requests only) */
};
```

If a request is being sent, `id` should be set to zero; the protocol implementation will then assign a unique ID to the request (and write it into `id`) before sending it to the server. For a reply message, `id` should be the ID value that arrived with the request being responded to. The `completion_cookie` value, which is only used for requests, will be passed back to the caller with the reply data when it is received.

The receive side is a bit more complicated because Homa requires that the buffer space for replies be registered before sending the first request on a socket. To do so, the process should allocate a range of memory, then pass it into the kernel with the `SO_HOMA_RCVBUF` `setsockopt()` operation, using this structure:

```c
struct homa_rcvbuf_args {
	void *start;	/* must be page-aligned */
	size_t length;	/* size of the buffer region in bytes */
};
```

The start address must be page-aligned. This memory is split into individual buffers, called "bpages," each of which is `HOMA_BPAGE_SIZE` in length; that size is 64KB in the current implementation. Each message will occupy at least one bpage; large messages will be scattered across multiple, not necessarily contiguous, bpages.
A message is received by making a call to `recvmsg()` with a pointer to this structure passed in the `msg_control` field of `struct msghdr`:

```c
struct homa_recvmsg_args {
    uint64_t id;
    uint64_t completion_cookie;
    uint32_t flags;
    uint32_t num_bpages;
    uint32_t bpage_offsets[HOMA_MAX_BPAGES];
};
```

The `flags` field describes what the caller is willing to receive; it is a bitmask that can include either or both of `HOMA_RECVMSG_REQUEST` (to receive request messages) and `HOMA_RECVMSG_RESPONSE` (to receive responses). If `id` is zero, then `HOMA_RECVMSG_RESPONSE` will cause any response message to be returned; otherwise, only a response corresponding to the provided request ID will be returned.

On return, `num_bpages` will indicate the number of bpages in the registered buffer area used to hold the returned message; `bpage_offsets` gives the offset of each one. The bpages returned by this call are owned by the application at this point and will not be used by the kernel until they have been explicitly returned. That is done with a subsequent `recvmsg()` call, where `num_bpages` and `bpage_offsets` will indicate a set of bpages to be given back.

This code has been "stripped down to the bare minimum" to be able to actually transmit requests and responses across the net; it is evidently about half of the full set of Homa patches. The intent, of course, is to ease the task of reviewing the work and getting initial support into the kernel; the rest of the work can come later. In its current form, according to the cover letter, its performance "is not very interesting," but that is expected to improve once the rest of the work is merged. See this [paper](#) for more information on the Linux implementation of Homa.

## Prospects

The Homa protocol originates at Stanford University, with support from a number of technology companies. Academic work often does not successfully make the transition from interesting prototype into production-quality code that can be accepted into Linux. In this case, though, Ousterhout seems determined to get the code into the mainline and is trying to do the right things to get it there.

Thus far, the four postings of the code have yielded some conversations about the protocol but have not yet resulted in a detailed review of the code. That suggests that the initial merge of Homa is not imminent. It does seem likely to happen at some point, though. Then, it will be a matter of whether the operators of large data centers decide that it is worth using.

Complicating that question is Ousterhout's assertion (in the above-linked paper) that, even in a kernel with less overhead than Linux, CPUs simply are not fast enough to keep up with the increases in networking speed. The real future for Homa, he suggests, may be inside the networking hardware itself. In that case, the merging into Linux would be an important proof of concept that accelerates further development of the protocol, but its use in real-world deployments might be limited. It does, in any case, show how Linux is firmly at the center of protocol development for modern networks.

originally posted at
nym's avatar
nym 1 year ago
A Pixel Parable This is his second lucky break of the weekend. A friend recommended he go to this sci-fi convention; she said it would be good for networking. Mark wouldn’t mind having one of his illustrations on a trading card, or a rule book, or, who knows, one of those fantasy novels he used to read in high school. So he shows up to the convention on Friday. Someone notices his work—his pencil landscapes that look hand-painted—, they invite him to join in the art show, he does, two days later he wins first place. That’s his first lucky break. The second is this guy, Gary, coming over to him, praising his work, asking him to ‘audition’ for a video game job. “Could you come over to the Ranch for an interview?” Gary insists. Mark reads the card again and stops for a second to think what to say next. He needs a job, after all. “I’d be happy to come over to do anything at all there but… I don’t know the first thing about video games. I never even touched a computer.” “That’s alright,” Gary replies, “we’ve had better luck teaching artists how to program than the other way around. I’m not worried about that part.” Driving back home, Mark tries to make sense of what just happened. On his first weekend—on his first serious attempt at becoming a professional illustrator—he’s offered to interview for a role that he didn’t even know existed, but that now sounds like a dream job—one that he’s terribly unqualified for. Later that night, he calls his parents and learns his father has just bought a personal computer. That’s his third lucky break. “What are you doing with this thing, anyway?” Mark is sitting in front of the computer, skimming the manual. The cover reads Atari 520ST. “The school made a deal for us to get them at half price. They say we should get computer-literate if we want to have a job in five years.” The voice comes through the doorway, then his father, leaning on the frame. “I figured I could use it for writing, but they have a different brand at school, so I can’t print my files there. And I’m not buying a printer, so I don’t see the point. You can have it if you want.” “Let’s see if I can get the job, first.” Mark keeps on reading. “It says there should be a drawing program for this. NeoChrome. Let’s try it out.” It takes them about 20 minutes to find the disk and open the program—their first project since his father taught him how to change his oil—then Mark switches to the NeoChrome manual. Another half hour later, he’s dropping little green dots over a blue background. His hand feels like a claw as he holds that little mouse. Whenever a connection sparks between the image on the screen and the image in his brain, he jerks to grab a pencil, a phantom limb of his. This machine won’t let him forget his body. For a few evenings, Mark secludes himself in his old high school bedroom, getting familiar with the computer and its painting program. He puts together a little African hut picture and teaches himself to reproduce it, over and over, from a blank page. He repeats it one last time at Skywalker Ranch, a few days later, to survive his interview. ![](https://m.stacker.news/72197) Mark walks out of the garage and meets Gary on the porch. Gary shows him around the Stable House, introducing everyone in the Games Group, and takes him to his desk. He’ll be sitting next to Gary and across the hall from Ron and David, the programmers. Mark notices there are not one but two different computers on his desk. And neither looks like the Atari he knows. 
“That’s a Commodore 64,” points Gary, “and that’s a DOS PC. We’ve been transitioning from C64 to DOS. In fact, your first job will be porting the backgrounds of our new game. You’ll notice these are, uh, a bit… clunky, when it comes to graphics. Especially that one,” he points at the PC. Mark feels like throwing up. “I know, it’s a lot,” Gary laughs. “Look, the only thing you need to know about this one is how to run the game. As for this one… You’ll mostly just be using DPaint.” Nobody around here really knows what they are doing, Gary reassures him, not even the programmers. They are all just trying to figure out what it means to tell a story with a computer. What a video game worthy of LucasFilm would be. There are no experts at that. “An artist’s perspective is what we expect you to bring to the table. The tools, you’ll figure out. They change all the time, anyway.” They spend the next hour fiddling with Deluxe Paint II, the drawing program for the PC. It’s like NeoChrome but better, or so Gary says. Mark notes the colors are fewer and uglier than the ones he saw on the Atari. “Oh, yeah. Those are the colors of your nightmares, starting tonight.” After lunch, Ron sits with Mark to play Maniac Mansion, his B-horror adventure game. He shows Mark how to move around, how to talk with other characters, how to solve puzzles to make progress. Maniac Mansion is a blueprint of the kind of work they are trying to do. There’s a new game they’ve been working on, David’s game, Zak MacKracken, but Ron says Maniac Mansion is better for getting started. It’s best if Mark spends a couple of days getting familiar with it. His impostor syndrome kicks in again; he’s no gamer, not even an arcade player. “That’s perfect,” Ron says. “We want to build something that just about anyone can pick up and have fun with.” Mark leaves the office with sore eyes from the computer screen and a headache from all the names and images shoved into his brain. He’s relieved that no one’s around to see him pull his Honda out of the underground garage. He slows down as he drives by the Main House, where they had lunch that day, a new building made to look old—period-specific old. Just like the one they put in the game. He circles by an artificial lake, a barn, a vineyard. This little valley is as otherworldly as any of his fantasy landscapes. As a shot from Star Wars. ![](https://m.stacker.news/72198) His first assignment is to port Zak MacKracken's Commodore64 backgrounds to the EGA PC. David hands him a description of each location in the game. They call them rooms even though some are outdoors—outer space, even. Each one consists of a short description and a list of “hotspots”, the things the player can interact with: objects, doors, that kind of thing. He has to make sure those remain visible on the new backgrounds. Other than the list of rooms, the only design document is a huge chart posted on a wall, a sort of storyboard for programmers. Mark can’t make sense of it—or the game, for that matter. Zak MacKracken is bigger and more ambitious than Maniac Mansion; the work seems more interesting but the game is undecipherable to Mark. At first, he tries working from the original C64 bit pictures, but that complicates things. Both are 16-color systems, but not the same 16 colors, so swapping palettes is pixel Whac-A-Mole. He needs to reproduce them from scratch. 
He sketches in his notebook, plots a grid on graph paper, and tapes acetate sheets to his monitor—anything to delay the moment when he has to move to the computer, where nothing flows, all so clumsy and rigid and LEGO-like. Then there’s the palette: black, dark gray, light gray, white, dark blue, light blue, cyan, yellow, mustard brown, dark red, poppy red, peach, magenta, acid-hot pink, grass green, and acid-chartreuse. Always the same suffocating 16 colors for anything he needs to draw. He has to ponder carefully what colors to “spend”, an early decision that constrains the rest of his choices: the scene composition, the mood, what’s shown, what’s hinted. There’s no room for impulse or experimentation, everything has to follow a plan. Despite his Digital Artist title, his job doesn’t seem much concerned with art. The only creativity is in subverting the tools, working around them, against them, exploiting their limitations. ![](https://m.stacker.news/72200) “Coppola,” says David. “Coppola, of course,” Gary concurs. “The Rolling Stones.” “Wait, all of them?” “Hmm. Mick Jagger. And the drummer, I guess.” “I missed them. I did see Huey Lewis.” “Yup. We played softball with the band.” It’s Mark’s third week and, for the first time, he catches a glimpse of George Lucas. They usually only see him at the restaurant when he has visits. Gary and David are listing all the famous people they saw at lunch. Today it’s Spielberg. “You’ll understand, of course,” David turns to Mark, “that, while it may seem as if they were right there across the room, we are not breathing the same air. We’re worlds apart.” “Galaxies,” Gary suggests. “Galaxies apart, thank you. They are holograms, like that Leia message on the first one. We can see them but they don’t see us.” “Under no circumstances should we be noticed by Lucas.” “Or one of his guests.” “Or any film-related people.” “And especially not Lucas.” The owner doesn’t care for video games. The existence of the games division is a sort of corporate accident, a spin-off of the Graphics Group prompted by a frustrated collaboration with Atari. And the fact that they got to stay while the Graphics Group—now called Pixar—was sold to Steve Jobs is another corporate accident. They’re a rounding error, the last hackers standing, the only division totally unrelated to filmmaking—a kind of intruder. So the idea is to make themselves invisible, not to remind George Lucas that they exist, that he still owns this little video game studio, that they are spending his money and, much worse, taking up his precious space. “Our man Steve, on the other hand, is our biggest fan,” David points his fork at Spielberg. “You’ll be seeing a lot of him.” “This is like an amusement park to him. He’s more into it than Lucas, I think.” “He’d probably live here if he wasn’t busy, you know, churning out blockbusters.” “Did you know he used to call Ron for Maniac Mansion hints?” “So yeah, I bet he’ll get involved in one of the games sooner than later.” “An Indy game, most likely.” “When the tech is good enough.” “And when they get back the license.” “Right, when we get the license.” That part Mark already knows, that he learned in his first week: LucasFilm Games doesn’t have the rights to make LucasFilm games. No Indiana Jones, no Star Wars. Some toy company holds the license.
Instead, they are expected to come up with original ideas, something that is both a blessing and a curse: they have creative freedom but they must live up to the Lucas name: “Stay small, be the best, don’t lose any money,” Gary proclaims. “And don’t embarrass George.” ![](https://m.stacker.news/72201) The mouse, the pixels, the 16-color palette, the hotspots: those are the constraints he has to work with. One trick he discovered early on—a hack, programmers would say—is that, when he arranges the pixels in a checkerboard pattern, they will bleed and blend as he zooms them out on the screen. Much like the eyes finish the job as one steps back from an impressionist painting, the monitor melts the pixel mosaic into something richer than what that dull EGA palette could ever project. At first, this is just an accidental observation, he doesn’t make much of it. It’s only when he starts working on a new batch of Zak backgrounds that he finds himself thinking about those mixed pixels again. This section of the game takes place on Mars, a location Mark finds very provocative. The acid EGA palette seems strangely fitting there. He owes no loyalty to the muddy C64 backgrounds and he need not abide by reality, either: he’s safely into sci-fi territory. He realizes he can weaponize the pixel-blending artifact and turn this into one of his fantasy landscapes. Drawing from Red Rock and Grand Canyon photos, he easily settles on a composition: a fiery desert, a rocky horizon, and a slightly displaced pale sun. It’s the palette that gives him the most work, hours of trial-and-error. He needs the right color combinations and the right density of interleaved pixels for each figure, each boundary. He wants the image to jump out of the screen; he wants the sky and the sun and the ground to bleed into each other distinctly—the sun to set the sky on fire and the earth to bed the ashes. It’s not the original C64 background, the EGA palette, or the hotspots list that dictates his work. It’s not what he pictured in his head. It’s the braid: each pixel born out of its predecessor, each one birthing the next. Little squares boiling with possibility, with no purpose but to carry his intent. For once, he doesn’t feel constrained by his material. He’s so free that the work becomes free in turn. He tamed it into rebelling and becoming something other than what he set out to produce, something better than what he could have imagined. It’s then, when the work speaks for itself, that he knows. This may not be art, not yet, but it’s better than anything he did and anything he’s seen on a computer screen. There’s the spark. This is the direction, that’s where he needs to go. ![](https://m.stacker.news/72202) Mark walks towards the door, then turns. “I can’t leave yet, I haven’t finished packing.” He looks at his desk. “I should put all this stuff in the box.” He picks up a pile of sketchbooks. “They are labeled by month and year.” He puts the pile of sketchbooks in the box. He picks up a worn-out DPaint 2 manual. “There’s a picture of an Egyptian mask on the cover.” He puts the worn-out DPaint 2 manual in the box. He picks up a set of colored pencils. “I hand-picked these myself, one for each of the 16 EGA colors. I guess I won’t be needing them anymore.” He puts the set of colored pencils in the trash bin. He picks up a Sam & Max issue. “My favorite.” He puts the Sam & Max issue in the box. He picks up an Indiana Jones action figure. “Indy.” He puts the Indiana Jones action figure in the box. 
He picks up a Chewbacca action figure. “Chewie.” He puts the Chewbacca action figure in the box. He picks up a Sleeping Beauty reference book. “I never bothered returning this to the library.” He puts the Sleeping Beauty reference book in the box. He picks up a signed Loom box. “It’s signed by The Professor. I signed another copy for him.” He puts the signed Loom box in the box. He picks up the box. “This box is too full, I can’t carry it like this.” He puts the box back on the desk. He walks towards the door, then turns. “I can’t leave yet, I haven’t finished packing.” He looks at the desk. “Neat.” He looks at the desk drawer. “Neat.” He opens the desk drawer. He looks at the open desk drawer. “There’s a piece of rope here.” He picks up the piece of rope. “This might come in handy.” He looks at the open desk drawer. “It’s empty.” He uses the piece of rope on the box. “Much better.” He picks up the box. He walks out. ![](https://m.stacker.news/72203) The Honda Civic drives out of the underground garage and turns around the Stable House. Lake Ewok glows like a dithered mirror. The car passes by the barn and the corral then drives away from the security kiosk and onto the main road. A tall tree goes by, followed by two short ones. Then two short trees go by, followed by a tall one. Then two short trees go by, followed by a tall one. Then there are no more trees, just hills and grass and road. The hills smooth down into a plain, Californian unlikely, and the flat darker blue sky grows naked in turn. The Honda proceeds and the road proceeds but then ends abruptly, like an abandoned flooring job. The car rides on over generic green grass for a while, approaching an edge, moving out of the picture. But not all of it. Halfway out, it freezes. I can still make out the trunk and the glass, and the corner of a tire, sitting there, stationary. originally posted at
nym's avatar
nym 1 year ago
Not every user owns an iPhone

As software engineers and technologists it's common to have access to powerful devices and super-fast bandwidth. It's highly likely that you will be developing and testing on a high-end Mac (or similar) or pulling an expensive mobile device such as an iPhone from your pocket. But we need to be careful that this doesn't lull us into a false sense of reality. We need to take care that we don't end up sitting in ivory towers thinking the performance of our applications is rosy, when in the wild our users are facing a different reality.

I'm going to utilise the RUM Archive's fantastic dataset to hopefully paint a picture of why this is important. The following measurements were narrowed to beacons collected from the United Kingdom. It's important to highlight that the RUM Archive data is sampled. I will lean on Core Web Vitals a little, but because of the comparison I want to highlight I will also use metrics that are collected from Safari users. Throughout we'll look at the 75th percentile of users for each metric and we'll zero in on mobile devices.

**First Contentful Paint (FCP)**

Let's start at the beginning of a user's experience: the point at which the user can see a change on screen. "Is it happening?" Contrasting the FCP seen from iOS and Android users:

![](https://m.stacker.news/72179)

The 75th percentile for Android users is 400ms slower than that of iOS users, a significant 34% slower.

**Time To Interactive (TTI)**

Next to look at is Time To Interactive. mPulse describes this as

> "The time when the page looks ready to use and is responsive to user input."

I've chosen to consider this metric as INP is unavailable for Safari users and I'm seeking a comparative measure of the time it takes for a page to become interactive, i.e. the point at which a page could reliably respond to user interaction such as a click. This metric could provide a good understanding of how our split of users are handling the resources that are delivered as part of the page, i.e. the JavaScript! We see a significant performance gap contrasting iOS and Android users.

![](https://m.stacker.news/72180)

The 75th percentile for Android users is 66% slower than the iOS equivalent, more than 2 seconds slower! Alex Russell's work on the performance inequality gap highlights that the Android user base will typically be on lower-powered devices compared to those with an iPhone. Devices with less hardware capability are more prone to being overwhelmed by heavy JavaScript execution.

**Interaction to Next Paint**

So what does the impact of low-end devices look like on user interaction? At this point it would be really great if Safari could support Core Web Vitals and in particular INP, but it's not quite Christmas yet! Instead, let's take a closer look at INP measurements for Android users by device model, to see how device capability can impact INP.

To give a sense of device capability I'm going to utilise GeekBench's scores for each device model. GeekBench focuses on the device's CPU performance, so we'll use this as a measurement of the device's capability. We'll use GeekBench's single-core score as this is relevant for running applications that are lightly threaded, such as web applications. We then plot this data to visualise whether there is any correlation between INP times and the device's CPU performance. I've taken the top 100(ish) Android devices by beacon count during the period.

![](https://m.stacker.news/72181)

Probably not a huge surprise to this audience: we see a correlation.
As CPU performance decreases we see an increase in INP times. What is interesting to see is the size of the gap between the high-powered and lower-end devices, as well as some of the startling measurements! The user experience ranges significantly.

**How might an iPhone compare?**

Although we haven't got access to a useful set of INP scores for iOS, we can contrast the scores collected by GeekBench for CPU performance. The highest score for a Samsung Android device is for the Galaxy S24 Ultra, scoring 2135.

![](https://m.stacker.news/72182)

In contrast, the highest-scoring iOS device currently is the iPhone 16 Pro, clocking up an impressive score of 3423. That's a 60% higher CPU performance score than the S24 Ultra!

![](https://m.stacker.news/72183)

In fact, you have to go back to the iPhone 12 to find a version of the iPhone that scores less than today's fastest Samsung Android device. To add a little more perspective, that device was first sold over 4 years ago!

![](https://m.stacker.news/72184)

Using the correlation from above, we can be reasonably confident that the INP performance experienced on the latest iPhone is going to be faster than even the latest, greatest Android device on the market. In fact, in many cases even an old iPhone stuck in a drawer somewhere would give the best Android devices a run for their money!

**So why should we care about this?**

Well, because Android users make up a huge slice of the mobile web audience. In the UK it's the largest slice, with 52% of the market share according to StatCounter. If you're working in online retail, that's a lot of potential customers.

![](https://m.stacker.news/72185)

From the sample of data I looked at, only 43.6% of Android page loads came from device models that score 1000 or above on GeekBench (remember, an iPhone purchased within the last 4 years scores > 2000). Taking the 52% Android share and the 56.4% of Android page loads below that score, that works out to roughly 29% of overall web users who could at best (and most likely worse) have a mobile device three times less powerful than the latest high-powered iPhone. How does the experience look to them? Are we considering these users whilst developing and testing the applications we're responsible for?

**What can we do?**

The first thing we can do is understand the conditions our users are accessing our web applications from. Have you got Real User Monitoring implemented? Do you have granular insight into your users' conditions?

- Visibility of attributes such as device type, OS and device model. Something like Akamai's Device Characteristics header is super useful for this.
- Usage of the Navigator API to provide insight into things such as device memory (where supported) and hardware concurrency.
- Network conditions. Through the Network Information API we can gain insight into areas such as the connection type (4g, 3g etc.), the connection's round-trip time (RTT) and user downlink speeds. Again, where supported.

What is the CrUX data showing for your application? If you haven't got RUM running, this might be a good place to start. Are you proactively monitoring and utilising this data to understand real user experience and typical user conditions? Are you going beyond measuring a high-level p75 and looking at the detail beneath?

Using this data you can get an insightful picture of the profiles of your users and have visibility of which of these profiles are common and important. With this knowledge we can ensure our development and testing cycles cover real user conditions.
We need to dogfood our work in these conditions. Sometimes we need to feel the pain to enact positive change. If an engineer is frustrated at the responsiveness of an 'Add to cart' button click during development, then they're probably more likely to investigate why and resolve it.

Chrome allows us to easily throttle CPU and network capability via the Performance tab in its developer tools. Low-powered devices that match your user profiles can be purchased (cheaply!), or services such as BrowserStack can be used to test the experience on real devices. Other approaches exist; the important thing is that there are ways to achieve this.

**Wrapping up**

Through the data, we've seen the gap in user experience between high-powered and low-end mobile devices. The population of mobile users on these low-spec devices is significant and they should not be ignored. They are a group we could be (and probably currently are) alienating, which leads to missed opportunities if we continue to provide a poor user experience.

As online teams we should be proactively understanding the profiles of our users and the conditions they operate within, looking beyond what sits on our desks and the comfort of expensive tech. Not everyone is lucky enough to own an iPhone. We need to experience our users' reality and build this into our software development processes. This way we can build inclusive web applications, improve the experience for the many and recover opportunities we would otherwise miss.

originally posted at
nym's avatar
nym 1 year ago
More Countries to Establish Bitcoin Reserves in 2025, Fidelity Says Financial services giant Fidelity predicts several countries will begin stockpiling Bitcoin this year, bringing about broader adoption of the world's oldest crypto. Several nations could establish strategic Bitcoin reserves in 2025 to hedge against “debilitating inflation, currency debasement, and increasingly crushing financial deficits,” Fidelity wrote in a report on Monday. That’s because global leaders are warming up to the crypto following the U.S.’s touted plans to embrace Bitcoin amid growing institutional investor interest, according to Fidelity Digital Assets analyst Matt Hogan. “We anticipate more nation-states, central banks, sovereign wealth funds, and government treasuries will look to establish strategic positions in Bitcoin,” Hogan said in the report. Crypto-curious countries might model their plans for creating Bitcoin reserves after policies from pro-Bitcoin nations such as Bhutan and El Salvador, where government officials have already notched significant returns. El Salvador's Bitcoin holdings are valued at more than $570 million, while Bhutan holds north of $1.1 billion, on-chain analytics platform Arkham Intelligence's data shows. The U.S. has the most significant Bitcoin stockpile of any world nation, valued at roughly $19.3 billion. It’s followed by China, the UK, and Ukraine, which hold $19.2 billion, $6.2 billion and $4.7 billion of the cryptocurrency, respectively. Those massive holdings do not necessarily translate into massive returns, however. Some countries, such as the U.S., have certain requirements for handling or auctioning off Bitcoin, limiting their ability to count the assets as part of their treasuries. Still, there are plenty of incentives for countries to begin holding Bitcoin, particularly as the asset’s price continues to hover around its all-time-high price of $108,000, the report shows. “We may be entering the dawn of a new era for digital assets, one poised to span multiple years — if not decades,” Hogan said in the report. originally posted at
nym's avatar
nym 1 year ago
Just made it out of #LA. It was crazy to say the least.
nym's avatar
nym 1 year ago
Bitcoin Investor Ordered to Reveal Access Codes to $124 Million An early Bitcoin investor sentenced last month to two years in prison for tax fraud related to cryptocurrency sales has been ordered to disclose his secret pass codes so US officials can unlock digital assets now valued at about $124 million. Frank Richard Ahlgren III, who owes the government about $1 million in restitution from the criminal case, must hand over the pass codes and identify any devices used to store them, along with disclosing all his cryptocurrency accounts, US District Judge Robert Pitman ruled Monday in federal court in Austin, Texas. Prosecutors had asked the judge in December to force Ahlgren to disclose the location of at least 1,287 Bitcoin he moved in 2020 through a “mixing” service that jumbled crypto tokens and made them harder to trace. Those tokens, which have more than doubled in value over the past year, are now worth more than $124 million. Ahlgren, who lives in Austin, was the first American convicted of tax crimes tied solely to the sale of cryptoassets. He’s agreed to pay $1 million in restitution to the US to cover tax losses from underreporting capital gains on the sale of $3.7 million in Bitcoin. Prosecutors said he used some of the proceeds to buy a house in Park City, Utah. In their request, prosecutors said Ahlgren’s property “cannot be attached by ordinary physical means.” The government asked “not only to restrain any virtual currency by order of this court, but to obtain the private keys to enable it access so that it cannot be moved by others. Should the private keys be lost or destroyed, the virtual currency is irretrievable.” The judge’s order said that Ahlgren cannot “dissipate,” transfer or sell any property without prior approval of the court, but he can spend on “normal monthly living expenses.” Ahlgren, who pleaded guilty on Sept. 12, was sentenced on Dec. 12. His attorney, Dennis Kainen, said his client will comply with the order. “We will comply with a court directive, or to the extent that we have a question, we will direct it to the court,” Kainen said. “We appreciate the care that Judge Pitman has taken throughout this case.” The case is US v. Ahlgren, 24-cr-00031, US District Court, Western District of Texas (Austin). originally posted at
nym's avatar
nym 1 year ago
A Fool's Errand? The Case Against Holding Bitcoin in a Corporate Treasury

https://download.ssrn.com/2025/1/2/5080327.pdf (SSRN abstract 5080327)

Abstract: This paper evaluates the feasibility of using cryptocurrencies, such as Bitcoin, as corporate treasury reserve assets. Through an analysis of price volatility, liquidity constraints, and regulatory uncertainty, the study highlights the significant risks these assets pose. Cryptocurrencies' high volatility and uncertain regulatory landscape are misaligned with the fundamental goals of treasury management: stability, liquidity, and capital preservation. While they may hold potential for speculative investments or strategic ecosystem participation, cryptocurrencies are unsuitable as primary treasury reserves. The findings reaffirm the critical role of traditional instruments, such as Treasury securities, in safeguarding financial stability and supporting corporate operations.

VII. Conclusion: This paper demonstrates that incorporating cryptocurrencies into a corporate treasury portfolio introduces significant and unacceptable levels of risk. The extreme volatility of cryptocurrencies, as evidenced by their high Value at Risk (VaR), contradicts the core principles of treasury management, which prioritize stability, liquidity, and risk mitigation. While cryptocurrencies may offer potential benefits as speculative investments or for aligning with specific business strategies, their inclusion in the treasury reserve asset pool is ill-advised. Traditional treasury instruments, such as U.S. Treasury securities, continue to offer adequate risk-adjusted returns and unparalleled liquidity, aligning perfectly with the objectives of capital preservation and operational stability.
Corporate treasurers should prioritize these time-tested assets and risk management techniques while carefully evaluating the potential benefits and risks of engaging with cryptocurrencies through alternative strategies, such as blockchain investments or ecosystem participation. originally posted at
nym's avatar
nym 1 year ago
Great summary, thanks.
nym's avatar
nym 1 year ago
</> htmx ~ The future of htmx

htmx began life as intercooler.js, a library built around jQuery that added behavior based on HTML attributes. For developers who are not familiar with it, jQuery is a venerable JavaScript library that made writing cross-platform JavaScript a lot easier during a time when browser implementations were very inconsistent, and JavaScript didn't have many of the convenient APIs and features that it does now. Today many web developers consider jQuery to be "legacy software." With all due respect to this perspective, jQuery is currently used on 75% of all public websites, a number that dwarfs all other JavaScript tools.

We are going to work to ensure that htmx is extremely stable in both API & implementation. This means accepting and documenting the quirks of the current implementation. Someone upgrading htmx (even from 1.x to 2.x) should expect things to continue working as before. Where appropriate, we may add better configuration options, but we won't change defaults.

We are going to be increasingly inclined to not accept new proposed features in the library core. People shouldn't feel pressure to upgrade htmx over time unless there are specific bugs that they want fixed, and they should feel comfortable that the htmx they write in 2025 will look very similar to the htmx they write in 2035 and beyond.

We will consider new core features when new browser features become available; for example, we are already using the experimental moveBefore() API on supported browsers. However, we expect most new functionality to be explored and delivered via the htmx extensions API, and we will work to make the extensions API more capable where appropriate.

htmx does not aim to be a total solution for building web applications and services: it generalizes hypermedia controls, and that's roughly it. This means that a very important way to improve htmx — and one with lots of work remaining — is by helping improve the tools and techniques that people use in conjunction with htmx. Doing so makes htmx dramatically more useful without any changes to htmx itself.

originally posted at
nym's avatar
nym 1 year ago
Federal Reserve Bank of NY "Doomsday Book" 2022 via FOIA

https://www.crisesnotes.com/content/files/2023/12/NYFRB-2006.--Doomsday-Book--Searchable.pdf

The "Doomsday Book" is a collection of emergency documentation and memoranda compiled by the Legal Department of the Federal Reserve Bank of New York. It serves two purposes. First, it is a ready reference source, containing template documents that must be prepared quickly, and background material that is likely to be particularly relevant to an emergency situation. Second, because all of its documents are on CD-ROMs, it is an operational mitigant against the risk of lost power or connectivity. The Doomsday Book, however, assumes working computers and printers.

originally posted at
nym's avatar
nym 1 year ago
End-to-end encrypted, peer-to-peer VPN tunnels for hackers

We are a community-driven service for hackers and the truly paranoid who want to establish multiple peer-to-peer, end-to-end encrypted VPN tunnels between their devices to facilitate secure communication between them, no matter where they are located. We only provide the encrypted transport between your devices; you bring everything else yourself.

originally posted at