I don't know what language you speak, but here is part of the bill in English:

> This bill would require, on or before July 1, 2028, any business that produces or manufactures 3-dimensional printers for sale or transfer in California to submit to the department an attestation for each make and model of printer they intend to make available for sale or transfer in California, confirming, among other things, that the manufacturer has equipped that make and model with a certified firearm blueprint detection algorithm. If the department verifies a printer make and model is properly equipped, the bill would require the department to issue a notice of compliance, as specified. The bill would require, on or before September 1, 2028, the department to publish a list of all the makes and models of 3-dimensional printers whose manufacturers have submitted complete self-attestations and would require the department to update the list no less frequently than on a quarterly basis and to make the list available on the department’s internet website. The bill, beginning on March 1, 2029, would prohibit the sale or transfer of 3-dimensional printers that are not equipped with firearm blocking technology and that are not listed on the department’s list of manufacturers with a certificate of compliance verification, except as specified. The bill would authorize a civil action to be brought against a person who sells, offers to sell, or transfers a printer without the firearm blocking technology.

https://leginfo.legislature.ca.gov/faces/billTextClient.xhtml?bill_id=202520260AB2047

Let me point out this statement:

> The bill, beginning on March 1, 2029, would prohibit the sale or transfer of 3-dimensional printers that are not equipped with firearm blocking technology and that are not listed on the department’s list of manufacturers with a certificate of compliance verification, except as specified.

It seems pretty clear this would prohibit the sale of 3D printers that are not approved by the California DOJ. It's not nice to lie about extremely obvious things.
> Yes it is. It is still exactly as simple as it sounds. If I’m doing math billions of times that doesn’t make the base process somehow more substantial. It’s still math, still a machine designed to predict the next token without being able to reason, meaning that yes, they are just fancy pattern-matching machines.

I find this argument even stranger. Every system can be reduced to its parts and made to sound trivial that way. My brain is still just neurons firing. The world is just made up of atoms. Humans are just made up of cells.

> There’s actually a few commonly understood theories of existence that are generally accepted even by laypeople, like, “if I ask a sentient being how many Rs there are in the word ‘strawberry’ it should be able to use logic to determine that there are three and not two,” which is a test that generative AI frequently fails.

This shows that the author is not very curious, because it's easy to take the worst examples from the cheapest models and extrapolate. It's like asking a baby some questions and judging humanity's potential on that basis. What's the point of that?

> The questions leftists ask about AI are: does this improve my life? Does this improve my livelihood? So far, the answer for everyone who doesn’t stand to get rich off AI is no.

I'll spell out the real tension here for all of you. There are people who really like their comfy jobs and have become attached to their routine. Their status, self-worth, and everything else are tied to it. Anything that disrupts this routine is obviously worth opposing. It's quite easy to see how AI can make a person's life better; I have plenty of examples. But that's not what "leftists" care about: it's the security of their jobs. The rest of the article is pretty low quality and full of errors.
> In the latter, if you see `foo` in the body of a function definition you have no idea if it's a simple computation or some sophisticated and complex control structure just from what it looks like.

All control structures are reserved keywords in Haskell, and they're not extensible from within the language. In C, by contrast, I can't tell that an `if (condition)` isn't a function call or a macro without searching for additional syntactic cues, or without already knowing that `if` is never a function. In practice I rely on syntax highlighting, plus the knowledge that `if` is always a control structure; I almost never scan around for the following statement terminator or block to disambiguate the two (see the C sketch at the end of this comment).

I've found that programmers generally overestimate how much the unreadability they experience with the ISWIM family is an objective property of the grammar. It's really just a matter of unfamiliarity.

Firstly, I say this as a programmer who did not get started in the ML family and initially struggled with those languages. The truth of the matter is that they simply engage a different kind of mental posture, and this is generally true of all language families.

Secondly, and pertinent to that last point, the sense of "well, this is just plain unreadable" isn't unique to going from the Algol family to the ISWIM family. The same thing happens in reverse, and across pretty much any language-family boundary. For example: Prolog/Horn clauses are one of the least ambiguous syntax families (less so than even S-expressions, IMO), and yet Elixir is far more popular than Erlang, and the most commonly cited reason has to do with the syntax. Many will say that Erlang is unintuitive, confusing, strange, opaque, etc., and that it's hard to read and comprehend. It's the same unfamiliarity at play. I've never programmed Ruby, and I find Elixir borderline incomprehensible, while Erlang is among the top three most readable and writable languages for me, because I've spent a lot of time with Horn clauses.

I think there's a general idea among programmers that when you learn how to program, you are doing so in a universal sense: once you've mastered one language, the mental structures you've built up are the platonic forms of programming and computer science. But this is not actually the case. More problematically, the idea is propped up and reinforced when a programmer jumps between two very similar languages (semantically and/or syntactically): while they do encounter some friction (learning to deal without garbage collection, list comprehensions, etc.), nothing about the jump fundamentally requires building up an entirely different intuitive model. This exists on a continuum in both semantics and syntax. My Erlang example indicates this, because semantically the language is nothing like Prolog; its differentiation from Elixir is purely syntactic.

There is no real universal intuition you can build up for programming. There is no point at which you've mastered enough fundamentals to cross language-family boundaries trivially. I've built up intuition for more formal language families than is possibly reasonable, and yet every time I encounter a new one I still have to go through the same process of forgetting virtually everything I knew. The only "skill" I've gained from doing this is knowing better than to think that mastery of J would let me comfortably read complex Forth code.
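To make the C point concrete: a macro can be made to read exactly like a built-in control structure at its use site, so call syntax alone can't tell you whether you're looking at a keyword, a function, or a macro. Here is a minimal sketch; the `repeat` macro is hypothetical, invented purely to illustrate the ambiguity:

```c
#include <stdio.h>

/* A hypothetical macro whose use site reads like a built-in loop
   keyword. Nothing about "repeat (3) { ... }" syntactically reveals
   that it isn't part of the language. */
#define repeat(n) for (int repeat_i_ = 0; repeat_i_ < (n); ++repeat_i_)

int main(void) {
    repeat (3) {            /* looks like a control structure... */
        printf("hello\n");
    }
    return 0;               /* ...but it's just the macro defined above */
}
```

This is exactly the ambiguity that can't arise in Haskell, where `if`, `case`, and the other control keywords are reserved words that no library can shadow with something visually identical.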