Okta's nextjs-auth0 troubles (joshua.hu)
If you don't like it, that's fine, I won't argue over taste. But your other descriptions of Bach's life deserve to be fact-checked.

> He was a nepo baby with a big purse. His brothers, his family, all musicians of note for prominent figures of society. However, his leaning on his long history of music within the family helped polish his work as structured which helped sell it.

This interpretation is not particularly historically accurate. Let's investigate:

> He was a nepo baby with a big purse.

Musicians of the Baroque era weren't particularly wealthy or notable; musical fame wouldn't come until the Classical era. And yes, music was his family trade, but that's how most trades worked in that era. His parents both died before he turned ten, so he was mostly raised by his older brother. By all accounts they were not wealthy. So I think the term "nepo baby" is misleading, and "with a big purse" is simply incorrect.

> His brothers, his family, all musicians of note for prominent figures of society.

This is highly exaggerated. JS Bach had two brothers who survived childhood, and neither was particularly "prominent." Most of his "notable family" were his children, especially CPE Bach.

> However, his leaning on his long history of music within the family helped polish his work as structured which helped sell it.

Bach's career was one of slow and steady growth; it doesn't appear that he leaned much on his connections or family name. He did earn some widespread acclaim by the end of his life, but mostly as an organist, not as a composer. His compositions were largely discarded and ignored for a whole century until Felix Mendelssohn revived interest in them. The cello suites, for example, were lost for nearly two hundred years and only rediscovered in the 1920s.
One of the things I've always been curious about is how effective diffusion models can be for web and app design. They're generally trained on more organic photos, but post-training on SDXL and Flux has given me good results here in the past (with the exception of text). It's been interesting to see the results of Nano Banana Pro in this domain. Here are a few examples:

Prompt: "A travel planner for an elegant Swiss website for luxury hiking tours. An interactive map with trail difficulty and booking management. Should have a theme that is alpine green, granite grey, glacier white"

Flux output: https://fal.media/files/rabbit/uPiqDsARrFhUJV01XADLw_11cb4d2afc6d488ab5c7c233fb25d0ca.jpg

NBP output: https://v3b.fal.media/files/b/panda/h9auGbrvUkW4Zpav1CnBy.png

---

Prompt: "a landing page for a saas crypto website, purple gradient dark theme. Include multiple sections, including one for coin prices, and some graphs of value over time for coins, plus a footer"

Flux output: https://fal.media/files/elephant/zSirai8mvJxTM7uNfU8CJ_109b06b0bde84a49a21816c0c2348487.png

NBP output: https://v3b.fal.media/files/b/rabbit/1f3jHbxo4BwU6nL1-w6RI.png

---

Prompt: "product launch website for a development tool, dark background with aqua blue and neon gold highlights, gradients"

Flux output: https://fal.media/files/zebra/aXg29QaVRbXe391pPBmLQ_4bfa61cc10fc43ddba7102581f799b36.png

NBP output: https://v3b.fal.media/files/b/lion/Rj48BxO2Hg2IoxRrnSs0r.png

---

Note that this is with a LoRA I built for Flux specifically for website generation. Overall, NBP seems to have less creative / inspired outputs, but the text is FAR better than the fever dream Flux produces. I'm really excited to see how this changes design. At the very least, it's proven it can get close to production-quality output; now it's just about tuning.
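For anyone who wants to reproduce the Flux-plus-LoRA half of this, here's a minimal sketch using the open-source diffusers library. The LoRA path, step count, and guidance value are placeholders for illustration, not my actual training setup:

    # Sketch: Flux text-to-image with a website-design LoRA (diffusers).
    # "./my-website-lora" is a hypothetical path to fine-tuned weights.
    import torch
    from diffusers import FluxPipeline

    pipe = FluxPipeline.from_pretrained(
        "black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16
    ).to("cuda")
    pipe.load_lora_weights("./my-website-lora")  # trained on site screenshots

    image = pipe(
        "product launch website for a development tool, dark background "
        "with aqua blue and neon gold highlights, gradients",
        num_inference_steps=28,
        guidance_scale=3.5,
    ).images[0]
    image.save("landing_page.png")

The LoRA is doing most of the heavy lifting for layout structure; base Flux alone tends to drift back toward photographic compositions.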
What determines which “average” AI models latch onto?

At a pixel level, the average of every image is a grayish rectangle; that's obviously not what we mean, and AI does not produce that. At a slightly higher level, the average of every image is the average of every subject ever photographed or drawn (human, tree, house, plate of food, ...) in concept space; but AI still doesn't generate a human with branches or a house with spaghetti on it. At a still higher level there are things we recognize as sensible scenes, e.g., a barista pouring a cup of coffee, an anime scene of a guy fighting a robot, a watercolor of a boat on a lake, which AI still does not (by default) average into, say, an equal-parts watercolor/anime/photorealistic image of a barista fighting a robot on a boat while pouring a cup of coffee.

But it is undeniable that AI images do have an “average” feel to them. What causes this? What is the space over which AI is taking an average to produce its output?

One possible answer is that a finite model size means the model can only explore image space at a limited resolution, and as models get bigger/better they can average over a smaller and smaller portion of this space, but it is always limited. But that raises the question of why models don't just naturally land on a point in image space. Is this just a limitation of training, which punishes big failures more strongly than it rewards perfection? Or is there something else at play that's preventing models from landing directly on a “real” image?
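One way to make the "what space is the average over?" question concrete: a denoiser trained with MSE loss learns the posterior mean E[x0 | xt], i.e., an average over every training image consistent with the noisy input. Here's a toy 1-D analogy I made up (not how real models work, just the loss geometry): with two equally likely "images" at +1 and -1 and heavy noise, the optimal single prediction collapses toward the gray midpoint rather than either real image.

    # Toy 1-D analogy: the MSE-optimal denoiser outputs E[x0 | xt],
    # an average over modes, not a real data point.
    import numpy as np

    rng = np.random.default_rng(0)
    x0 = rng.choice([-1.0, 1.0], size=100_000)   # two "images"
    sigma = 2.0                                  # heavy noise
    xt = x0 + sigma * rng.normal(size=x0.shape)  # noisy observations

    # For this two-point mixture the optimal denoiser has a closed
    # form: E[x0 | xt] = tanh(xt / sigma^2).
    pred = np.tanh(xt / sigma**2)
    print(np.abs(pred).mean())  # well below 1: between the real images

Iterative sampling is supposed to break this collapse by re-noising and re-denoising, which is partly why outputs aren't literally gray; but guidance and fine-tuning on curated data still pull samples toward high-density, "safe" regions, which might be where the "average feel" comes from.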