Hacker News

Ask HN: What startup/technology is on your 'to watch' list?

For me, a couple of interesting technology products that help me in my day-to-day job are:

1. Hasura
2. Strapi
3. Forest Admin (super interesting, although I can never get it to connect to a Hasura backend on Heroku ¯\_(ツ)_/¯)
4. Integromat
5. AppGyver

There are many others that I have my eye on, such as Node-RED[6], but have yet to use. I do realise that these are all low-code related; however, I would be super interested in hearing about other cool & upcoming tech that is making waves.

What's on your 'to watch' list?

[1]https://hasura.io/

[2]https://strapi.io/

[3]https://www.forestadmin.com/

[4]https://www.integromat.com/en

[5]https://www.appgyver.com/

[6]https://nodered.org/

1006 points | iameoghan posted 14 days ago | 699 comments
699 Comments:
Animats said 13 days ago:

Self-driving cars. Now that the hype is over and the fake-it-til-you-make-it crowd has tanked, there's progress. Slowly, the LIDARs get cheaper, the radars get more resolution, and the software improves.

UE5's rendering approach. They finally figured out how to use the GPU to do level of detail. Games can now climb out of the Uncanny Valley.

The Playstation 5. 8 CPUs at 3.2GHz each, 24GB of RAM, 14 teraflops of GPU, and a big solid state disk. That's a lot of compute engine for $400. Somebody will probably make supercomputers out of rooms full of those.

C++ getting serious about safety. Buffer overflows and bad pointers should have been eliminated decades ago. We've known how for a long time.

Electric cars taking over. The Ford F-150 and the Jeep Wrangler are coming out in all-electric forms. That covers much of the macho market. And the electrics will out-accelerate the gas cars without even trying hard.

Utility scale battery storage. It works and is getting cheaper. Wind plus storage plus megavolt DC transmission, and you can generate power in the US's wind belt (the Texas panhandle north to Canada) and transmit it to the entire US west of the Mississippi.

fergie said 13 days ago:

> Self-driving cars. Now that the hype is over and the fake-it-til-you-make-it crowd has tanked, there's progress. Slowly, the LIDARs get cheaper, the radars get more resolution, and the software improves.

Still don't see fully automated self-driving cars happening any time soon:

1) Heavy steel boxes running at high speed in built-up areas will be the very last thing that we trust to robots. There are so many other things that will be automated first. It's reasonable to assume that we will see fully automated trains before fully automated cars.

2) Although a lot is being made of the incremental improvements to self-driving software, there is a lot of research about the danger of part-time autopilot. Autopilot in aircraft generally works well until it encounters an emergency, in which case a pilot has to go from daydreaming/eating/doing-something-else to dealing with catastrophe in a matter of seconds. Full automation or no automation is often safer.

3) The unresolved/unresolvable issue of liability in an accident: is it the owner or the AI who is at fault?

4) The various "easy" problems that remain somewhat hard for driving AI to solve in a consistent way. Large stationary objects on motorways, small kids running into the road, cyclists, etc.

5) The legislative issues: at some point legislators have to say "self driving cars are now allowed", and create good governance around this. The general non-car-buying public has to get on board. These are non-trivial issues.

peteforde said 13 days ago:

You could be right.

My alternative possible timeline interpretation is that two forces collide and make self-driving inevitable.

The first force is the insurance industry. It's really hard to argue that humans are less fallible than even today's self-driving setups, and at some point the underwriters will take note and start premium-blasting human drivers into the history books.

The second force is the power of numbers; as more and more self-driving cars come online, it becomes more and more practical to connect them together into a giant mesh network that can cooperate to share the roads and alert each other to dangers. Today's self-driving cars are cowboy loners that don't play well with others. This will evolve, especially with the 5G rollout.

dahfizz said 13 days ago:

This reminds me that Tesla itself is starting to offer insurance, and it can do so at a much lower rate. I assume this is because:

1) Teslas crash much less often, mostly due to autopilot.

2) Tesla can harvest an incredible amount of data from one of their cars and so they can calculate risk better

mxschumacher said 3 days ago:

How much does a Tesla know about the state of its driver, e.g. to detect distraction, tiredness or intoxication?

Does Tesla see when you speed and increase your premiums?

funcDropShadow said 13 days ago:

Having high-speed steel boxes carrying human lives and who knows what else react to messages from untrusted sources. Hmm. What could go wrong?

peteforde said 12 days ago:

I'm going to ignore the snark and pretend as though this is a good faith argument, because we're on Hacker News - and I believe that means you're a smart person I might disagree with, and I'm challenging you.

I want to understand why being in a high-speed steel/plastic box with humans (overrated in some views) controlled by a computer scares you so much. Is it primal or are you working off data I do not have? Please share. I am being 100% sincere - I need to understand your perspective.

To re-state in brief: (individual) autonomous self-driving tech today tests "as safe as" ranging to "2-10x safer" than a typical human driver. This statistic will likely improve reliably over the next 5-10 years.

However, I am talking about an entire societal mesh network infrastructure of cars, communicating in real-time with each other and making decisions as a hive. As the ratio flips quickly from humans to machines, I absolutely believe that you would have to be quantifiably unsane to want to continue endangering the lives of yourself, your loved ones and the people in your community by continuing to believe that you have more eyes, better reactions and can see further ahead than a mesh of AIs that are constantly improving.

So yeah... I don't understand your skepticism. Help me.

amurthy1 said 12 days ago:

The risk is that a bad actor could hack into this network and control the cars.

peteforde said 11 days ago:

Security-minded thinking dictates that we should move forward with the assumption that it will happen. The important outcome is not "we can't do anything as a society because bad men could hurt us" but "how do we mitigate and minimize this kind of event so that progress can continue".

Look: I don't want my loved ones in the car that gets hacked, and I'm not volunteering yours, either. Sad things are sad, but progress is inevitable and I refuse to live in fear of something scary possibly happening.

It is with that logic that I can fly on planes, ride my bike, deposit my money in banks, have sex, try new foods and generally support Enlightenment ideals.

I would rather trust a mesh of cars than obsess over the interior design of a bunker.

machinehermit said 13 days ago:

Totally agree.

If all the cars in the area know one of the cars is about to do something and can adjust accordingly then it will be so much safer than what we have now it is almost unimaginable.

It would seem at some point in the future, people are not going to even want to be on the road with a human driver who is not part of the network.

mrweasel said 13 days ago:

The hype around self-driving cars is still very much around. I tend to view any debate about fully autonomous cars (level 5) as unserious if it works with less than a 15-20 year time horizon.

roenxi said 13 days ago:

In 2014 top humans could give a good Go playing AI 4 stones (a handicap that pushes games outside of being between comparable players).

In 2017 AlphaGo could probably give a world champion somewhere between 1 and 3 stones.

From an algorithmic perspective the range between "unacceptably bad" and superhuman doesn't have to be all that wide and it isn't exactly possible to judge until the benefit of hindsight is available and it is clear who had what technology available. 15-20 years is realistic because of the embarrassingly slow rate of progress by regulators, but we should all feel bad about that.

We should be swapping blobs of meat designed for a world of <10 km/h for systems that are actually designed to move quickly and safely. I've lost more friends to car accidents than to any other cause - there needs to be some acknowledgment that humans are statistically unsafe drivers.

KKKKkkkk1 said 13 days ago:

When you mention AlphaGo, you're committing a fallacy so famous that it has a name and a Wikipedia page (https://en.wikipedia.org/wiki/Moravec%27s_paradox). The things that are easy for humans are very different from those that are easy for robots.

mrweasel said 13 days ago:

I don't disagree that computers are better drivers, under certain conditions, but that's not the point.

I can drive myself home relatively safely in conditions where the computer can't even find the road. We're still infinitely more flexible and adaptable than computers.

It will be at least 20 years before my car will drive me home on a leaf- or snow-covered road. Should I drive on those roads? Most likely not, but my brain, designed for <10 km/h speeds, will cope with the conditions in the vast majority of cases.

GuB-42 said 13 days ago:

> It's reasonable to assume that we will see fully automated trains before fully automated cars.

https://en.wikipedia.org/wiki/Paris_M%C3%A9tro_Line_14

Fully automated since 1998, and very successful.

abainbridge said 13 days ago:

There were automated railways 30 years before that too. https://en.m.wikipedia.org/wiki/Automatic_train_operation

Breza said 7 days ago:

I've lived in Washington DC long enough to remember back when our subway was allowed to run in (mostly) automated mode. There was a deadly accident that wasn't directly the fault of the Automatic Train Control (the human operator hit the emergency brake as soon as she saw the parked train ahead of her) but it still casts light on some of the perils of automation.

The_rationalist said 13 days ago:

Another hard problem for AI is to "see" through rain

dahfizz said 13 days ago:

That's hard for humans too. I think we need to give up on the idea that fully autonomous driving will be perfect.

The_rationalist said 13 days ago:

I'm obviously talking about matching human performance and this is the hard problem

machinehermit said 13 days ago:

There is also an easy solution: just staying put.

I have driven in snow a few times when I was not sure I was even on the road, or when the only way I knew I was going in the right direction was that I could vaguely see the brake lights of the car going 15mph in front of me through the snow.

That is an easy problem to solve, though, because I simply should not have been driving in that.

mitchdoogle said 10 days ago:

Humans are pretty terrible at driving in rain and snow as well.

friendlybus said 12 days ago:

We already have fully automated trains. The DLR in London.

I am optimistic about solving those problems. Regulation always comes after the tech is invented. Cars have more opportunity to fail gracefully in an emergency: pull off onto the shoulder and coast to a stop, or bump into an inanimate object.

ianai said 13 days ago:

Of course the owner is to blame.

dahfizz said 13 days ago:

What if it's a rental or a lease? In a fully automated car, that's basically a taxi. I don't think I should bear the responsibility of my taxi driver.

If/when we get fully automated cars, this kind of driverless Uber will become extremely common. Who bears the risk then? This is a complicated situation that can't be boiled down to "Of course the owner is to blame"

LargoLasskhyfv said 12 days ago:

That is the most puzzling thing to me. Not from a technical, but from a societal ( https://en.wikipedia.org/wiki/Tragedy_of_the_commons ) point of view. Compare with public mass transit: except in Singapore and Japan, it is mostly dirty, in spite of cleaning staff working hard and other people being around. In a taxi/Uber you have the driver watching, and other rentals are usually inspected after each use, and immediately before the next rent-out, just to make sure.

Not so in car-sharing pools, and there it's already materializing as a problem. How do you solve that with your 'robo-cab'? Tapping 'dirty/smelly' in your app and sending it back to the garage? What if you notice it only 5 minutes after you started the trip, already robo-riding along? What if you have allergies to something the former customer had on or around them? Or they were so high on opioids that even a touch of the skin could make you drop? As can, and did, happen. How do you solve that without massive privacy intrusions? Or will those be the "new normal" because of all that Covid-19 trace app crap?

joshspankit said 13 days ago:

Counterpoint: In a fully-autonomous situation, of course the AI is to blame.

ianai said 13 days ago:

I think we need to consider that case when/if it happens. For the foreseeable future there needs to be a responsible driver present.

To go contrary to this is to invite outright bans of the tech.

ekianjo said 13 days ago:

> The Playstation 5. 8 CPUs at 3.2GHz each, 24GB of RAM, 14 teraflops of GPU, and a big solid state disk. That's a lot of compute engine for $400. Somebody will probably make supercomputers out of rooms full of those.

Mmm, this sounds exactly like what people said when the PS3 was about to be released, and I can only recall one example where the PS3 was ever used in a cluster, and that probably was not very useful in the end.

dangus said 13 days ago:

This exactly.

The PS5 and Xbox Series X are commodity PC hardware, optimized for gaming, packaged with a curated App Store.

Sony also won’t just sell you hundreds or thousands of them for some kind of groundbreakingly cheap cluster. They will say no, unless you’re GameStop or Walmart.

Everyone with a high-mid-range PC already has more horsepower than a PS5 and it’s not doing anything particularly innovative or groundbreaking.

The PS5 is going to be equivalent to a mid-range $100 AMD CPU, something not as good as an RTX 2080 or maybe even an RTX 2070, and a commodity NVMe SSD (probably cheap stuff like QLC) that would retail for about the same price as a 1TB 2.5" mechanical hard drive. It is not unique.

Data center servers optimize for entirely different criteria and game consoles do not make sense for anything coming close to that sort of thing. For example, servers optimize for 24/7 use and high density. The PS4 doesn’t fit in a 1U rack. It doesn’t have redundant power. Any cost savings on purchase price is wasted on paying your data center for the real estate, no joke. Then when the console breaks you have to pay your technician $100/hour in compensation, benefits, and taxes to remove and replace it.

Reelin said 13 days ago:

I think you're vastly understating current hardware prices.

An 8 core 2nd generation Zen chip appears to retail for $290. The PS5 reportedly has a custom GPU design, but for comparison a Radeon 5000 series card with equivalent CU count (36) currently retails for $270 minimum. Also, that GPU only has 6GB GDDR6 (other variants have 8GB) but the PS5 is supposed to have 16GB. And we still haven't gotten to the SSD, PSU, or enclosure.

Of course it's not supposed to hit the market until the end of the year - perhaps prices will have fallen somewhat by then? (Also I don't expect Sony to be making any money off the hardware at those prices, so I agree that they're unlikely to sell them to anyone who won't buy games for them.)

Johanx64 said 13 days ago:

The 2nd-gen Ryzen 2700 is out of stock currently, but it used to go for as low as $135-150; it's absolutely not a $290 CPU (perhaps you're looking at a 3rd-gen Ryzen, the 3700X?).

I haven't looked at what a GPU equivalent would be, but by the time the PS5 hits the market, I doubt it's going to be anywhere near $270.

As long as there aren't any supply chain disruptions (as there are now).

It appears that the real killer is the hardware-accelerated decompression block pulling the data straight from SSD into CPU/GPU memory in the exact right location/format without any overhead, which isn't available on commodity PC hardware at the moment.

Reelin said 11 days ago:

Ack my bad! I wrote "2nd generation Zen" but I meant to write "Zen 2" which is (confusingly) the 3rd generation.

I found some historical price data and I'm surprised - the 2700 really was $150 back in January! Vendors are price gouging the old ones now, and the 3700X is currently $295 on Newegg.

As far as the GPU goes, an 8GB from the 500 series (only 32 CU, released 2017) is still at least $140 today. And noting the memory again, that's 8GB GDDR5 versus (reportedly) 16GB GDDR6 so I'm skeptical the price will fall all that much relative to the 6GB card I mentioned.

SomeoneFromCA said 12 days ago:

Zen2 = Ryzen 3rd, not 2nd.

hobofan said 13 days ago:

> Also I don't expect Sony to be making any money off the hardware at those prices, so I agree that they're unlikely to sell them to anyone who won't buy games for them.

I think console hardware cost is generally budgeted at a slight loss (or close to break-even) at the beginning of a console generation, and then drops over the ~7 year lifespan.

FridgeSeal said 13 days ago:

> Everyone with a high-mid-range PC already has more horsepower than a PS5 and it’s not doing anything particularly innovative or groundbreaking.

The fact that it can stream 5.5GB/s from disk to RAM says otherwise. Commodity hardware, even high-end M.2 drives, can't match that.

* It's my understanding that it directly shares RAM between the CPU and the GPU, which means way less latency and more throughput.

reanimated said 13 days ago:

There are high-end drives on the PC market that can match and surpass that, but they are like $2000+ :) Linus talked about that topic last week: https://youtu.be/8f8Vhoh9Y3Q?t=1607

FridgeSeal said 10 days ago:

Watching some of that, and doing a bunch of reading on the PS5, it seems that some drives can kind of get close, but the PS5 physically has custom, dedicated hardware that can move data directly from the SSD into shared CPU-GPU memory with minimal work from the CPU, and that's a fundamental architectural advantage PCs don't have (yet).

I would sure like to see some architectural upgrades like this in the PC/server world though: I'd love an ML workstation where my CPU-GPU RAM is shared and I can stream datasets directly into RAM at frankly outrageous speeds. That would make so many things so much easier.

kayoone said 13 days ago:

While the individual components might not be as fast as a high-end PC's, the way the system is architected and the components are connected to each other (e.g. super high bandwidth from the SSD to CPU/GPU memory) gives it some advantages, especially for gaming. For the price it certainly is impressive.

reanimated said 13 days ago:

New console releases don't need to be particularly innovative or groundbreaking. They greatly increase the resources available to game devs, and game development is console-centric in the first place. Usually after a new console launches, game visual quality jumps quite noticeably within a couple of years. It's beneficial for everyone, even if you are not a console gamer yourself.

onion2k said 13 days ago:

> Then when the console breaks you have to pay your technician $100/hour in compensation, benefits, and taxes to remove and replace it.

No, you pay your minimum wage junior IT assistant to unplug the broken one and plug in a new one. That's the point of commodity hardware - it's cheaper to buy and cheaper to support.

noir_lord said 12 days ago:

Faster consoles are good if you're a PC gamer though, since games end up deployed for all three platforms and consoles are what hold progress back.

GTA 6 with the hardware in the new consoles will likely be spectacular.

marcelluspye said 13 days ago:

Are you referring to the time the US Air Force built a cluster out of 2000 PS3s? Seems good.

vanderZwan said 11 days ago:

Well, that just goes to show that you shouldn't trust hearsay, even if that hearsay is your own vague recollection of something. There is a whole Wikipedia page dedicated to the ways the PS3 was used as a cheap high-performance computing cluster:

https://en.wikipedia.org/wiki/PlayStation_3_cluster

The only reason that stopped happening was because Sony killed it on purpose:

> On March 28, 2010, Sony announced it would be disabling the ability to run other operating systems with the v3.21 update due to security concerns about OtherOS. This update would not affect any existing supercomputing clusters, due to the fact that they are not connected to PSN and would not be forced to update. However, it would make replacing the individual consoles that compose the clusters very difficult if not impossible, since any newer models with the v3.21 or higher would not support Linux installation directly. This caused the end of the PS3's common use for clustered computing, though there are projects like "The Condor" that were still being created with older PS3 units, and have come online after the April 1, 2010 update was released.

And in case you were wondering, the reason Sony killed it was that they sell their consoles at a loss and make up for it through game sales (which, indirectly, is what made the PS3 so affordable for people interested in cluster computing). If PS3s were merely bought for building cluster computers, Sony would end up with a net loss. (Nintendo is the only console maker that sells consoles at a profit.)

szszrk said 13 days ago:

PS3s were used as Debian clusters at my university, and would have been at a larger scale if not for a) the huge cost in my country at launch, b) middling availability, c) the "OtherOS" fiasco.

There was significant interest in the grid computing research community.

devbug said 13 days ago:

The key differentiator is x86 vs PPC and 1 TB/s bus.

midnightclubbed said 13 days ago:

PPC was ok, the killer was that you had to write code specifically for the Cell (co)processors and their limited memory addressing if you wanted the promised compute performance.

jjoonathan said 13 days ago:

> 1 TB/s bus

Is that the new marketing term for shared VRAM?

richardw said 13 days ago:

Most of that power came from the Cell processor, which was awesome but supposedly hard to develop for. I assume they’ve learned that lesson.

joshspankit said 13 days ago:

If by “learned” you mean changed focus from making it (uniquely) awesome and instead making it easier to develop for: yes.

And if by “learned”, you also mean “were convinced by Mark Cerny“ (who is still leading design of the PS5), then also yes.

masklinn said 13 days ago:

> The Playstation 5. 8 CPUs at 3.2GHz each, 24GB of RAM, 14 teraflops of GPU, and a big solid state disk. That's a lot of compute engine for $400. Somebody will probably make supercomputers out of rooms full of those.

That seems like a straight waste of time for lightly customised hardware you'll be able to get off the shelf. And unless they've changed since, the specs you quote don't match the official reveal of 16GB and 10 teraflops. Not to mention the price hasn't been announced; the $400 price point is a complete guess (and a pretty weird one given the XbX is guessed at 50% more… for a very similar machine).

friendlybus said 13 days ago:

GPU-solved LOD won't save video games from the uncanny valley. In some cases it will make it worse. It makes for nice statues and static landscapes though.

GuB-42 said 13 days ago:

> C++ getting serious about safety. Buffer overflows and bad pointers should have been eliminated decades ago. We've known how for a long time.

The crowd that uses C++ needs raw pointers sometimes, and you can't really prevent bad pointers and buffer overflows when they are used. There is a reason why Rust, whose goal is to be a safer C/C++, supports unsafe code.

Smart pointers are a very good thing to have in the C++ toolbox, but they are not for every programmer. Game programmers, if I am not mistaken, tend to avoid them, as well as other features that make things happen between the lines, like RAII and exceptions.

The good thing about the messiness that is modern C++ is that everything is there, but you can pick what you want. If you write C++ code that looks like C, it will run like C, but if you don't want to see a single pointer, you have that option too.
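
A tiny sketch of that "pick what you want" point (my own illustration, using nothing beyond the standard library) - the same little task written C-style and then modern-style:

    #include <cstdio>
    #include <cstdlib>
    #include <vector>

    // C-style: manual allocation, the caller has to remember to free.
    int* make_squares_c(int n) {
        int* out = (int*)std::malloc(n * sizeof(int));
        for (int i = 0; i < n; ++i) out[i] = i * i;
        return out; // ownership is implicit; easy to leak or double-free
    }

    // "Modern" style: the container owns its memory, no raw pointers in sight.
    std::vector<int> make_squares_cpp(int n) {
        std::vector<int> out(n);
        for (int i = 0; i < n; ++i) out[i] = i * i;
        return out; // freed automatically when the caller is done with it
    }

    int main() {
        int* a = make_squares_c(4);
        std::printf("%d\n", a[3]);
        std::free(a);                 // forget this and you leak

        auto b = make_squares_cpp(4);
        std::printf("%d\n", b[3]);    // nothing to free
    }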

IC4RUS said 13 days ago:

Maybe it was a purposeful reference, but PlayStations have indeed been linked to create a supercomputer: https://phys.org/news/2010-12-air-playstation-3s-supercomput...

jonathankoren said 13 days ago:

Even before that link. The PS2 Linux kit was used back in 2003.

https://web.archive.org/web/20041120084657/http://arrakis.nc...

zhenchaoli said 13 days ago:

> C++ getting serious about safety. Buffer overflows and bad pointers should have been eliminated decades ago. We've known how for a long time.

Would love some links to read over weekend. Thanks!

Teknoman117 said 13 days ago:

Things like:

- std::string_view

- std::span

- std::unique_ptr

- std::shared_ptr

- std::weak_ptr (non-owning reference to a shared_ptr; knows when the parent is freed)

- ranges

- move semantics

- move capture in lambdas

- std::variant

- std::optional

To be honest, learning Rust has made me a better C++ programmer as well. Having to really think about lifetimes and ownership from an API perspective has been really neat. It's not so much that I wasn't concerned about it before, more that I now strive to be more expressive about these conditions in code.
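
A small, hedged sketch of a few of the items above in use (std::unique_ptr, std::optional, std::variant) - the types and names are made up purely for illustration:

    #include <cstdio>
    #include <memory>
    #include <optional>
    #include <string>
    #include <variant>

    struct Texture { std::string name; };

    // unique_ptr: one explicit owner, freed automatically.
    std::unique_ptr<Texture> load_texture(const std::string& name) {
        return std::make_unique<Texture>(Texture{name});
    }

    // optional: "maybe a value" instead of sentinel values or null pointers.
    std::optional<int> parse_port(const std::string& s) {
        if (s.empty()) return std::nullopt;
        return std::stoi(s);
    }

    // variant: a type-safe tagged union.
    using ConfigValue = std::variant<int, std::string>;

    int main() {
        auto tex  = load_texture("grass");      // owns the Texture
        auto port = parse_port("8080");         // optional<int>
        ConfigValue v = std::string("debug");   // currently holds a string

        if (port) std::printf("port=%d\n", *port);
        std::printf("tex=%s\n", tex->name.c_str());
        std::printf("holds string: %d\n", (int)std::holds_alternative<std::string>(v));
    }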

midnightclubbed said 13 days ago:

Seconded that dipping a toe in to Rust has changed how I think about C++ and object ownership. Loose pointers and copy constructors now make me feel un-clean! Move ftw.

However, I feel like most of the heavy-lifting features came with C++11. span, optional, variant and string_view are nice additions to the toolkit, but they are enhancements rather than the paradigm shift of C++11 (move, unique_ptr, lambdas et al.).

asveikau said 13 days ago:

> Seconded that dipping a toe in to Rust has changed how I think about C++ and object ownership. Loose pointers and copy constructors now make me feel un-clean! Move ftw.

It's funny, because while it's certainly become more influential lately, that subculture existed as a niche in the C++ world before Rust and before C++11. So much so that when I first heard about Rust I thought "these are C++ people."

Yoric said 13 days ago:

The original (and long dead) Rust irc channel used to be full of C++ people chatting with OCaml people. Those were the days :)

sitkack said 12 days ago:

That entirely matches my idea of how Rust came to be, some sort of pragmatic co-development across two different philosophical camps. In many ways, Rust is a spiritual successor to both languages, if only it was easier to integrate with C++.

typon said 13 days ago:

lol i start most of my big objects by deleting the copy constructor and adding a clone member func :P
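
For anyone who hasn't seen that pattern, a rough sketch of what it looks like (illustrative only, not anyone's production code):

    #include <utility>
    #include <vector>

    class BigObject {
    public:
        BigObject() = default;

        // No accidental (and expensive) copies.
        BigObject(const BigObject&) = delete;
        BigObject& operator=(const BigObject&) = delete;

        // Moves are still cheap and allowed.
        BigObject(BigObject&&) = default;
        BigObject& operator=(BigObject&&) = default;

        // Copying has to be asked for by name.
        BigObject clone() const {
            BigObject copy;
            copy.data_ = data_;          // the one place a deep copy happens
            return copy;
        }

    private:
        std::vector<int> data_;
    };

    int main() {
        BigObject a;
        // BigObject b = a;              // error: copy constructor is deleted
        BigObject b = a.clone();         // explicit, visible copy
        BigObject c = std::move(a);      // cheap move
        (void)b; (void)c;
    }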

jfkebwjsbx said 13 days ago:

string_view, span and ranges are not conducive to safety; quite the opposite.

dnpp123 said 13 days ago:

Yeah, if anything, C++ is getting less serious about safety by piling features over features. Just write Rust instead.

typon said 13 days ago:

can you explain why you think that?

Animats said 13 days ago:

Things like "It is the programmer's responsibility to ensure that std::string_view does not outlive the pointed-to character array."

"string_view" is a borrow of a slice of a string. Since C++ doesn't have a borrow checker, it's possible to have a dangling string_view if the string_view outlives the underlying string. This is a memory safety error.

Rust has educated people to recognize this situation. Now it's standard terminology to refer to this as a borrow, which helps. Attempting to retrofit Rust concepts onto C++ is helping, but often they're cosmetic, because they say the right thing but aren't checked. However, saying the right thing makes it possible to do more and more static checking.
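
A minimal sketch of the dangling-view situation described above (my own illustration; it compiles cleanly by default, yet has undefined behaviour):

    #include <cstdio>
    #include <string>
    #include <string_view>

    std::string_view first_word(const std::string& s) {
        return std::string_view(s).substr(0, s.find(' '));
    }

    int main() {
        std::string_view v;
        {
            std::string s = "hello world";
            v = first_word(s);        // v borrows s's buffer
        }                             // s is destroyed here
        // v now dangles; the compiler does not complain.
        std::printf("%.*s\n", (int)v.size(), v.data());  // undefined behaviour
        return 0;
    }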

typon said 13 days ago:

But surely it's a step towards more safety. Compare it to passing char* around, or a ref/ptr to a string.

Sure, C++ doesn't have a borrow checker, but these types encourage the idea of "reifying" the lack of ownership rather than keeping it ad hoc.

benibela said 13 days ago:

I have always used Pascal for memory-safe strings. Reference counted, mutable xor aliased, bounds checked: their safety is perfect.

Unfortunately there is no string view, so you need to copy the substrings or use pointers/indices. I tried to build a string view, but the Free Pascal compiler is not smart enough to keep a struct of 2 elements in registers.

temac said 13 days ago:

You don't infer potential ownership from a C++ ref. Likewise for char* strings unless it is to interface with a C api, in which case you will keep it anyway.

radarsat1 said 13 days ago:

Wow. I hadn't read up much on string_view but I guess I assumed it required a shared_ptr to the string. Odd decision not to.

jfkebwjsbx said 13 days ago:

Rust hasn’t "educated people" about "borrowing".

Lifetime management has always been there for any developer dealing with resources in any language. Over the years, languages and specialized extensions and tools have offered different solutions to help with that problem.

What Rust has brought is a system that checks explicit tracking embedded into a mainstream language.

phaedrus said 13 days ago:

Static analysis tools like PVS-Studio are amazing. Software verification like CompCert, where compilation includes a certificate of correctness, is farther away for C++ but will someday be usable for it.

axegon_ said 13 days ago:

I never really paid attention to consoles (not a gamer in any way) but the ps5 sounds impressive. Shame Sony have a very Apple-like approach to their products and lock everything up. If they bundled up that hardware with linux support, sales would go through the roof and into orbit. I'd personally get a bunch of these and build myself a cluster.

stiray said 13 days ago:

Sony is selling them with little to no profit, as they expect to earn on games. Guess why their capable and cheap hardware is locked down to prevent it being used for anything except playing purchased games ;)

Anyway, you can jailbreak a PS4 on 5.05 firmware, and there are unpublished exploits in existence that are waiting for the PS5 to be released.

battery_cowboy said 13 days ago:

Looks like I found that "home server" to replace my over-use of cloud resources that I've been looking for!

stiray said 13 days ago:

Well, let me recommend something else: check out ASRock mini-ITX motherboards with an on-board CPU. You can get those for ~150 euros, throw in some RAM (~60 euros) and a disk (~100 euros) + a case (a Phenom mini-ITX, for instance, ~100 euros). For a home server this will work great :)

I have been running a home server (100% self-hosted, including email) on a J1900-ITX motherboard with 20TB of disk space (zraid) for years. No need to bother with a PS4/5.

reanimated said 13 days ago:

Well, the bundle you describe would be over 400€, while you can purchase a used PS4 for at least half that price, and even cheaper.

stiray said 13 days ago:

Yes, but the PS4 is a gaming rig and you will have to jailbreak it on every reboot. It depends on what you intend to run; a Raspberry Pi 4 and an SD card could be more than enough for some people. Those prices were rough estimates; my motherboard with CPU has been in there since 2014 and is now $60 while still being more than enough, and by going minimal (RAM, case, disk - with a PS4 you will get 1TB at most) you can pull it off under the PS4 price. In the end, if you divide those 400 euros by 6 years, you are at 5.55 euros/month (not to mention you can reuse the case and disks when upgrading), and it is a low-power setup (measured with 4 disks it was 33 watts).

Jailbreaking could be nice for other <wink> unnamed purposes.

SomeoneFromCA said 12 days ago:

I recently bought a low-power Ivy Bridge CPU + motherboard for $35 and 8 GiB of RAM for $25. No need to buy new hardware if you can make do with the old.

codeisawesome said 13 days ago:

Maybe they sell the H/W at a loss (especially considering R&D + marketing spend) and the real strategy is to turn a profit on PS Plus, licensing, and taking a cut of game distribution. If that's the case... you or me building a Linux cluster would actually hurt them =)

mcdevilkiller said 13 days ago:

Not maybe, that's exactly what they do.

cmckn said 13 days ago:

The PS3 had dual-boot support for Linux early on, for a couple years after launch. It was removed in a software update a week or two after I decided to try it. I don't see Sony doubling back on that one, but you never know.

zamalek said 13 days ago:

> That's a lot of compute engine for $400.

So excited for this as a PC gamer; hardware prices are going to have to plummet. I don't think supercomputers are likely; the PS3 was a candidate because there was [initially] official support for installing Linux on the thing. Sony terminated that support and I really can't imagine them reintroducing it for the PS5.

ethbro said 13 days ago:

Sony's only interest is to do a single deployment, using a customized OS and firmware, and then get as many articles out of the project as possible.

They have zero incentive to subsidize supercomputers. They're in the business of trading hardware for royalty, store, and subscription payments.

jjoonathan said 13 days ago:

And if they do, it would be wise not to trust them, because dropping support for advertised features with hardware-fused irreversible software updates is SOP at this point. FFS, they even dropped support for my 4K screen in an update, and I wound up playing the back half of Horizon Zero Dawn in 1080p as a result.

sandov said 13 days ago:

What? How and why did they drop support for your screen?

jjoonathan said 13 days ago:

Yes, really. They up and dropped an entire HDMI mode used by "older" 4K displays.

A cynic would say they wanted to boost sales of newer displays, but it seems more likely that a bug of some kind came up in a driver (I was unaware of any problems, but that's hardly proof of anything) and they just decided it was easier to cut support for those displays than to fix the problem.

Support forums filled with complaints by the dozens of pages, but Sony didn't care, because why should they? I'm sure somebody did the calculation that said we weren't a big enough demographic to matter.

FridgeSeal said 13 days ago:

> Somebody will probably make supercomputers out of rooms full of those.

So I learnt very recently that the PS5 has a cool approach where all memory is shared directly between the CPU and the GPU (if this is wrong, someone please correct me). It would be really interesting to see how well the GPU in this could handle DL-specific workloads, and if necessary, could it be tweaked to do so?

Because if so, that could be an absolute weapon of a DL workstation. If it does turn out to be feasible, I think it could be very easily justifiable to buy a few of those (for less than it would cost you to rent a major cloud provider's GPU-equipped instance for a couple of months) and have a pretty capable cluster. Machines get outdated or cloud provider costs come down? Take them home and use them as actual gaming consoles. Win-win.

CraftThatBlock said 13 days ago:

This is how APUs (which is what the PS4/PS5/Xbox use) handle memory: the RAM is shared between the graphics and compute units. This can be an advantage since memory is quickly shared between the two (for example when loading textures, etc.).

This is also useful in computers, since adding more RAM also adds more VRAM.

trebligdivad said 13 days ago:

Self-driving cars: yes, but only if they really work - now would be the perfect time to sell them if they did, for those of us who normally use public transport but don't currently like the thought of sitting in a petri dish for 2 hours.

Utility-scale battery storage: yes, but it needs tech improvements to store LOTS of energy; flow batteries might do it if the hype is true. Currently the UK wholesale electricity price is £-28/MWh due to a wind/solar glut and a quiet weekend, so if anyone wants to get paid to store that energy, the opportunity is there.

As for C++ safety; I find modern C++ hard to read - are they going to be able to do safety but end up with something that's actually harder to use/read than Rust?

kabacha said 13 days ago:

Can't help but laugh whenever I read self-driving car predictions like this, sorry.

My GPS can hardly navigate most of the world, so I'm not really excited, and if the only criterion for a self-driving car is driving itself on a highway, then color me uninterested.

I don't think self-driving cars will be able to traverse the majority of the world's traffic anytime soon. The roads are just too difficult to maintain for human-free driving, with the exception of a few major grid-layout cities in America, which makes the whole ordeal pretty boring.

virgilp said 13 days ago:

Self-driving cars don't need to be 100% autonomous in all possible scenarios in order to be useful. Self-driving reliably on the highway? Hell yes, I'd take that (just think of trucks - having a driver only for the stretch to and from the highway is so much cheaper than having someone drive it cross-country). Self-driving reliably in a few major cities? Oh, you mean a cheap robotic taxi?

frellus said 13 days ago:

Spot on. This is our approach at Ghost Locomotion - L3 is pretty darn good, and highways are actually pretty standards driven, unlike local roads or cities.

https://medium.com/ghost-blog/the-long-ignored-most-obvious-...

https://medium.com/ghost-blog/the-future-of-transportation-i...

kabacha said 13 days ago:

I agree with you; it's just as I said - it's not what we've been sold, and autopilot on the highway is kinda boring.

reanimated said 13 days ago:

I think it's incorrect to view it as either full self-driving or none at all. We are getting incremental benefits from this already: cars are correcting and preventing driver errors. They make instant trajectory corrections or come to a complete stop and prevent huge crashes. With time they will get better and better at recognising traffic lights, road signs, sudden unforeseen situations and so on, and that way driving safety will improve dramatically even before full self-driving capabilities arrive.

Havoc said 12 days ago:

Nice post. I think the PS5 read might be a little off though. The Pro edition is likely to be around $600 and come in a little lower than 14 teraflops.

woah said 12 days ago:

Why do they need lidar in the first place? Humans do fine with stereoscopic vision

beyondcompute said 13 days ago:

“Fake it till you make it” is precisely how it will be solved

abhinai said 13 days ago:

“Fake it till you make it” strategy works when you know how to make something but haven't made it yet. The strategy falls apart when people try to fake having solved hard open research problems.

tangjurine said 12 days ago:

> 8 CPUs at 3.2GHz each

8 CPU cores at 3.2GHz each?

aj-4 said 13 days ago:

Tesla covering 3/6... stock price is definitely still low

threeseed said 13 days ago:

I would never trust any self driving car that didn't use LiDAR. It's an essential sensor for helping to fix issues like this:

https://www.youtube.com/watch?v=1cSw4fXYqWI&feature=emb_logo

And it's not contrived, since we've seen situations of Tesla Autopilot behaving weirdly when it sees people on the sides of billboards, trucks, etc.

KKKKkkkk1 said 13 days ago:

LIDAR vs camera is a red herring. The fact that Elon and his fan club fixate on this shows you how little they understand about self driving. The fundamental problem is that there is no technology that can provide the level of reasoning that is necessary for self driving.

Andrej Karpathy's most recent presentation showed how his team trained a custom detector for stop signs with an "Except right turn" text underneath them [0]. How are they going to scale that to a system that understands any text sign in any human language? The answer is that they're not even trying, which tells you that Tesla is not building a self-driving system.

[0] https://youtu.be/hx7BXih7zx8?t=753

aeternum said 13 days ago:

A surprising number of human drivers would also not be able to 'detect' that 'except right turn' sign. Only 3 states offer driver's license exams exclusively in English; California, for example, offers the exam in 32 different languages.

Even so, it is quite possible to train for this in general. Some human drivers will notice the sign and will override Autopilot when it attempts to stop; this triggers a training-data upload to Tesla. Even if the neural net does not 'understand' the words on the sign, it will learn that a stop is not necessary when that sign is present in conjunction with a stop sign.

reanimated said 13 days ago:

They have hired much of the industry's top talent, so I think it's quite silly to claim they understand little about this. In my opinion, nobody has more knowledge of this field than Tesla and Waymo.

anchpop said 13 days ago:

Why does it need to work in any human language? It isn't as if self driving cars need to work on Zulu road signs before they can be rolled out in California. I'd be surprised if they ever needed to train it on more than 4 languages per country they wanted to roll out to.

aeternum said 13 days ago:

If I were driving I'd definitely stop for the person in the road projection at https://youtu.be/1cSw4fXYqWI?t=85

LiDAR also isn't a silver bullet. Similar attacks are possible, such as simply shining a bright light at the sensor and overwhelming it, as well as more advanced attacks such as spoofing an adversarial signal.

Reelin said 13 days ago:

I don't think it's attacks we need to worry about (there's even an XKCD about dropping rocks off of overpasses). The issue is that without good depth and velocity data (so probably LiDAR) there are lots of fairly common situations that an ML algorithm is likely to have trouble making sense of.

jaxn said 13 days ago:

I use autopilot every day. It stops for stoplights and stop signs now.

rootusrootus said 13 days ago:

Sometimes it also stops when on the freeway behind a construction truck with flashing lights.

rainyMammoth said 13 days ago:

It is misleading. Driving on the highway is by far the easiest part of self-driving.

Going from 3 nines of safety to 7 nines is going to be the real challenge.

jaxn said 12 days ago:

There aren't stoplights on the highway. I'm talking about in-city driving.

sedgjh23 said 13 days ago:

Humans don’t need LiDAR to recognize billboards

threeseed said 13 days ago:

Self driving cars can't rapidly move their cameras in multiple spatial directions like humans do on a continuous basis.

Also we have a pattern and object detection computer behind our eyes that nothing on this planet even remotely comes close to.

beambot said 13 days ago:

People don't have eyes in the back of their heads. Self-driving cars don't get drunk or distracted by cell phones. Comparing humans with AVs is apples & oranges. The only meaningful comparison is in output metrics such as Accidents & Fatalities per mile driven. I'd be receptive to conditioning this metric on the weather... so long as the AV can detect adverse conditions and force a human to take control.

aeternum said 13 days ago:

Chimps have us beat when it comes to short-term visual memory (Humans can't even come close).

Mantis shrimp have us beat when it comes to quickly detecting colors since they have twelve photoreceptors vs. our three.

Insects have us beat when it comes to anything in the UV spectrum (we're completely blind to it). Many insects also cannot move their eyes but still have to use vision for collision detection and navigation.

Birds have us beat when it comes to visual acuity. Most of them also do not move their eyeballs in spatial directions like we do, but they still have excellent visual navigation skills.

nwallin said 13 days ago:

Humans have visual processing which converts the signals from our three types of cones into tens to hundreds of millions of shades of color. Mantis shrimp don't have this processing. Mantis shrimp can only see 12 shades.

Human color detection is about six orders of magnitude greater than mantis shrimp's.

aeternum said 13 days ago:

Right, but the theory is that they have us beat when it comes to speed since they are directly sensing the colors whereas we are doing a bunch of post-processing.

monadgonad said 13 days ago:

I think the point was that brains are the best pattern and object detection computers, not necessarily just human brains.

CamperBob2 said 13 days ago:

> Also we have a pattern and object detection computer behind our eyes that nothing on this planet even remotely comes close to.

Not defending those who say that LIDAR isn't useful/important in self-driving cars, but this assertion is only marginally true today and won't be true at all for much longer. See https://arxiv.org/pdf/1706.06969 (2017), for instance.

varjag said 13 days ago:

Humans have about 2° field of sharp vision. Computers with wide angle lenses don't have to oscillate like the eyes do.

reitzensteinm said 13 days ago:

Humans are underrated.

bsder said 13 days ago:

On driving? I would posit that most humans are vastly overrated.

I suspect if you crunch the numbers, accidents are going to be above normal for a while after Covid-19 reopenings.

Anecdotally, I'm seeing people doing mind-blowingly stupid things on the roadways right now. It seems like people have forgotten how to drive. I suspect the issue is that people rely too much on other cars to cue them how to behave and the concentration is too low.

(It could also be that a constant accident rate cleans off the worst of the drivers with regularity as they get into accidents and then wind up out of circulation. I really hope that isn't why ... that would be really depressing.)

dialamac said 13 days ago:

No, they're underrated. We all know the stats; driving isn't the safest activity. Having said that, there's a lot of wishful thinking that the current state of ML could do any better if we were to just put it on the roads today as-is.

jhallenworld said 13 days ago:

You are right, for example, humans don't need anywhere near the amount of training data that AIs need.

enahs-sf said 13 days ago:

I learned to drive a car when I was 13. My older cousin took me to warped tour, got hammered and told me I had to drive home. I didn’t know what a clutch was, let alone a stick shift. After stalling in the parking lot a couple of times, I managed to drive us from Long Beach all the way back to my parents house in Pasadena. Love to see an AI handle that cold start problem.

williadc said 13 days ago:

Cold start? You had 13 years!

erik_seaberg said 13 days ago:

Self-driving cars could work more like a hive mind. Humans can share ideas, but not reflexes and motor memory. So we practice individually, and we're great at recognizing moving stuff, but we never get very good at avoiding problems that rarely happen to us.

And we know we shouldn't drive tired or angry or intoxicated but obviously it still happens.

TheOtherHobbes said 13 days ago:

Exactly. The way to improve performance on a lot of AI problems is to get past the human tendency to individualistic AI, where every AI implementation has to deal with reality all on its own.

As soon as you get experience-sharing - culture, as humans call it, but updateable in real time as fast as data networks allow - you can build an AI mesh that is aware of local driving conditions and learns all the specific local "map" features it experiences. And then generalises from those.

So instead of point-and-hope rule inference you get local learning of global invariants, modified by specific local exceptions which change in real time.

mkl said 13 days ago:

It seems to me that humans require and get orders of magnitude more training data than any existing machine learning system. High "frame rate", high resolution, wide angle, stereo, HDR input with key details focused on in the moment by a mobile and curious agent, automatically processed by neural networks developed by millions of years of evolution, every waking second for years on end, with everything important labelled and explained by already-trained systems. No collection of images can come close.

CreepGin said 13 days ago:

Depends on how you quantify data a human processes from birth to adulthood.

jack_pp said 13 days ago:

You're forgetting the million years of evolution

polishdude20 said 13 days ago:

But at the end of that video they state they were able to train a network to detect these phantom images. So this is something that can be fixed and has been proven to work. It's only a matter of time before it's in commercial cars.

tjchear said 13 days ago:

That same video said they trained a CNN to recognize phantoms using purely video feed and achieved a high accuracy with AUC ~ 0.99.

aj-4 said 13 days ago:

30%+ downvotes seems like there is not a consensus around this issue

dmode said 13 days ago:

I have an AP 2.5 Model 3. It will never be fully self-driving. It still has trouble keeping to its lane when the stripes are not simple. It still phantom-brakes.

samstave said 13 days ago:

WRT the F-150:

I am so upset with the state of the auto market when it comes to pricing.

Manufacturing margins are enormous when it comes to cars.

The F150 is no different.

A two seater (effectively) vehicle stamped out of metal and plastic should never cost as much as those things do.

I hate car companies and their pricing models.

dlbucci said 13 days ago:

Look up the Chicken Tax, a tariff that passed a few decades ago and basically stopped foreign car manufacturers from selling pickups in the US. That's why trucks are so much more expensive than other types of cars.

gonzo41 said 13 days ago:

It's also why you have huge F-series trucks and not more reasonably sized ones like the Hilux.

ethbro said 13 days ago:

Because small trucks require more fuel- and emissions-efficient engines than larger ones.

ponker said 13 days ago:

Ford's operating margin is ~8 percent; it's not like they're making triple-digit profit margins here. You are overreacting.

gpm said 13 days ago:

Web Assembly

It's interesting in a bunch of ways, and I think it might end up having a wider impact than anyone has really realized yet.

It's an ISA that looks set to be adopted in a pretty wide range of applications, web browsers, sandboxed and cross platform applications, embedded (into other programs) scripting, cryptocurrencies, and so on.

It looks like it's going to enable a wider variety of languages on the web, many more performant than the current ones. That's interesting on its own, but not the main reason why I think the technology is interesting.

Both mobile devices and cryptocurrencies are places where hardware acceleration is a thing. If this is going to be a popular ISA in both of those, might we get chips whose native ISA is WebAssembly? Once we have hardware acceleration, do we see wasm chips running as CPUs someday in the not-too-distant future (CPU with an emphasis on Central)?

A lot of people seem excited about the potential of RISC-V, and ARM is gaining momentum against x86 to some extent, but to me wasm actually seems best placed to take over as the dominant ISA.

Anyways, I doubt that thinking about this is going to have much direct impact on my life... this isn't something I feel any need to help along (or a change I feel the need to try and resist). It's just a technology that I think will be interesting to watch as the future unfolds.

duckfruit said 13 days ago:

I want to believe... I always thought WebAssembly had a lot of potential; however, in practice it doesn't seem to have turned out that way.

I remember the first Unity demos appearing on these orange pages at least 4 or 5 years ago and promptly blowing me away. But, after an eternity in JavaScript years, I still don't know what the killer app is, technically or business-wise. (Side note - I encourage people to prove me wrong; in fact I'd love to be! That's what's so engaging about discussions here. I'd love to see examples of what WebAssembly makes possible that wouldn't exist without it.)

CrazyStat said 13 days ago:

I can tell you about a WebAssembly killer app for a small niche. lichess uses WebAssembly to run a state-of-the-art chess engine inside your browser to help you analyze games [1]. Anyone who wants to know what a super-human chess player thinks of their game can fire it up on their desktop, laptop, or even phone (not highly recommended, it's rough on your battery life).

Obviously very serious chess players will still want to install a database and engine(s) on their own computer, but for casual players who just occasionally want to check what they should have done on move eleven to avoid losing their knight it's a game changer.

[1] https://lichess.org/analysis

BigJono said 13 days ago:

I think chess.com has something similar too, but not sure if it's powered by wasm.

If it's not, I'd be interested to see a speed and feature comparison between the two.

julianeon said 13 days ago:

I think there might be killer apps that companies aren't publicizing, because it's part of their competitive advantage.

Example of WASM being used in a major product:

https://www.figma.com/blog/webassembly-cut-figmas-load-time-...

You can infer from this that it's making them 3x faster than anything a competitor can make, and probably inspired a lot of those 'Why is Figma so much more awesome than any comparable tool?' comments I remember reading on Twitter months back.

duckfruit said 13 days ago:

Agreed - Figma is a very good example. I stand corrected.

raihansaputra said 13 days ago:

I read that a few days ago and just realized why Figma runs better than Miro/RealTimeBoard. I wish the Miro team is also looking to port to WASM/boost performance. I don't think it's easy though; Figma's effort started in 2017.

runawaybottle said 13 days ago:

Adobe XD uses some Wasm also.

mbzi said 13 days ago:

An example I can give:

I use WebAssembly for a few cross-platform plugins, e.g. an AR 3D rendering engine in C++ and OpenGL. With very little effort it works in the browser. No bespoke code, same business logic, etc. It saved a lot of time vs creating a new renderer for our web app.

For me it allows a suite of curated plugins which work cross-platform. The web experience is nearly as nice as the native mobile and desktop experience. This in turn increases market growth, as more of my clients prefer the web vs downloading an app (which is a large blocker for my users). I also enjoy the code reuse, maintainability, etc. :)
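
For anyone curious what that C++-to-wasm route roughly looks like, here's a hedged sketch using Emscripten - the file name, function and build flags below are made up for illustration, and exact flags vary by Emscripten version:

    // plugin.cpp - a tiny piece of "shared business logic" compiled to wasm.
    // Build (assuming the Emscripten SDK is installed), roughly:
    //   emcc plugin.cpp -O2 -o plugin.js -s EXPORTED_RUNTIME_METHODS=ccall
    #include <emscripten/emscripten.h>

    extern "C" {

    // EMSCRIPTEN_KEEPALIVE keeps the symbol exported so JS can call it.
    EMSCRIPTEN_KEEPALIVE
    float blend_factor(float distance_mm) {
        // Same math the native renderer would use; no browser-specific code.
        if (distance_mm < 0.0f) distance_mm = 0.0f;
        return 1.0f / (1.0f + distance_mm * 0.01f);
    }

    } // extern "C"

On the JS side it's then roughly Module.ccall('blend_factor', 'number', ['number'], [42]), while the same .cpp file keeps compiling into the native mobile and desktop builds.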

Another:

This year Max Factor (via Holition Beauty tech) won a Webby award for in-browser AI and AR. This was used to scan a users face, analyse their features, advise them on what make up, etc, would suit them, after which the user can try it on. This would have been impossible without WebAssembly.

This tech is also used by another makeup brands beauty advisors (via WebRTC) to call a customer and in real-time advise them on their make up look, etc.

Is this tech necessary? Probably not, but it is a lot nicer than having to go to a store. Especially when we are all in lockdown :)

1) https://www.holitionbeauty.com/

2) https://winners.webbyawards.com/?_ga=2.215422039.1334936414....

3) https://www.maxfactor.com/vmua/

tylerlarson said 13 days ago:

I built a slower version of something with the same idea 13-14 years ago in Flash for http://www.makeoversolutions.com, which most of these makeup companies licensed back then.

I moved on from that a decade ago but it was a neat project at the time.

But I deployed my first integration of WASM about a month ago for PaperlessPost.com. It is a custom H.264 video decoder that renders into a canvas and manages timing relative to other graphics layers over the video. This code works around a series of bugs we've found with the built-in video player. It went smoothly enough that we are looking into a few other hot spots in our code that could also be improved with WASM.

One avenue for WASM might be simply polyfilling the features that are not consistently implemented across browsers.

mbzi said 11 days ago:

I feel like I am looking in a mirror!

Ten years ago I did the same but in Java and JOGL (before Apple banned OpenGL graphics within Java applets embedded within a webpage). It was used for AR watch try-on within https://www.watchwarehouse.com and eBay. The pain of Flash and applets still wakes me up at night.

I'm also building something very similar but with the ability for custom codecs (https://www.v-nova.com/ is very good). Probably the same issues too! Could I know more about your solution?

duckfruit said 13 days ago:

This is really great work and exactly the kind of response I was hoping for - thank you. I wonder why tech like this is not being more widely used, for example on Amazon product pages. Especially with the well known reluctance as you mentioned of people downloading apps.

mbzi said 11 days ago:

Thanks, much appreciated!

I think WebAssembly is more used than it appears, just difficult to see/tell.

A few years ago I actually tried integrating AR via WebAssembly with Amazon. We couldn't get the approval due to poor performance on Amazon Fire devices (which have low-end hardware). It is a shame but it is what it is.

What is disappointing/annoying, as a CTO, is that it is near impossible to hire someone with WebAssembly skills. It requires an extra-curious engineer with a passion for both native and web. Training is always important for a team, but when going down the WebAssembly route you need to be extra focused and invest more than what a typical engineer would be allocated (e.g. increase training from 1 day a week to 2-3). I suppose this may put people off?

paulgb said 13 days ago:

> I'd love to see examples of what WebAssembly makes possible that wouldn't exist without it.

I've been playing with WebAssembly lately, and the moment it clicked for me how powerful it is was when I built an in-browser crossword filler (https://crossword.paulbutler.org/). I didn't write a JS version for comparison, but a lot of the speed I got out of it came from doing zero memory allocation during the backtracking process. No matter how good JS optimization gets, that sort of control is out of the question.

I also think being able to target the browser from something other than JS is a big win. 4-5 years is a long time for JS, but not a long time for language tooling; I feel like we're just getting started here.

rraghur said 13 days ago:

Great work.. This is amazing! Thanks for sharing

duckfruit said 13 days ago:

This is brilliant, thank you!

jjcm said 13 days ago:

If you’re looking for a real world example of Webassembly being used in production at a large scale for performance gains, check out Figma. Their editor is all wasm based, and is really their secret sauce.

duckfruit said 13 days ago:

Thank you! I just checked them out, and I stand corrected. Really an excellent design tool and very responsive. I see now that for certain applications WASM is indeed the right tool for the job.

growlist said 13 days ago:

Speedy client-side coordinate conversion in geospatial apps, thus avoiding the round-trip to the server.

lukevp said 13 days ago:

I agree! WASM is very interesting. Blazor is an exciting example of an application of Web Assembly - it's starting out as .net in the browser, but you can imagine a lightweight wasm version of the .net runtime could be used in a lot of places as a sandboxed runtime. The main .net runtime is not really meant to run unprivileged. It would be more like the UWP concept that MS made to sandbox apps for the windows App Store, but applicable to all OSes.

One thing I haven't heard much about is the packaging of wasm runtimes. For example, instead of including all of the .net runtime as scripts that need to be downloaded, we could have canonical releases of major libraries pre-installed in our browsers, and could even have the browser have pre-warmed runtimes ready to execute, in theory. So if we wanted to have a really fast startup time for .net, my browser could transparently cache a runtime. Basically like CDN references to JS files, but for entire language runtimes.

This would obviate the need for browsers to natively support language runtimes. It's conceptually a way to get back to something like Flash or SilverLight but with a super simple fallback that doesn't require any plugin to be installed.

catmanjan said 13 days ago:

I look forward to in browser DLL hell /s

I'm cautiously optimistic about Blazor; it definitely makes streaming data to the DOM much easier.

k__ said 13 days ago:

Blazor seems like the one application of WASM at the moment that goes in completely the wrong direction.

People are already whining about JS bundle sizes, and even the small .net runtimes are >60kb.

Yew on the other hand seems to fit right into what WebAssembly was made for.

rogihee said 13 days ago:

The download size does make it hard to use for a "public" site, like a webshop. But it is a different story for an application, like an intranet solution or an app like Figma. A first-time download of a few MBs is not a problem, as you use it regularly, like a desktop application.

It is the first time in a long while that you can develop a full stack application (client and backend) in one language in one debugging session. For C#, the last time that was possible was Silverlight.

Small companies (like mine) that deliver applications and have full stack engineers can have some amazing productivity!

So for my needs I'm really excited with something like Blazor, and this was only the first release.

k__ said 13 days ago:

I understand the appeal for .net devs.

I just don't think it's a good idea in general.

blackoil said 13 days ago:

For every person whining about 60k JS there are 10 creating 10MB web app.

als0 said 13 days ago:

Cautionary tale: we’ve been here before with JVM CPUs like Jazelle. They didn’t take over the world.

gpm said 13 days ago:

Absolutely, but there's been plenty of technologies where the time wasn't right the first time around, but it was the second, or third, or fourth.

bklaasen said 13 days ago:

See https://vintageapple.org/byte/ and search on the page for "Java chips" or download the PDF directly at https://vintageapple.org/byte/pdf/199611_Byte_Magazine_Vol_2...

I remember being really excited at the concept. Of /course/ we needed Java co-processors!

scarface74 said 13 days ago:

Even closer to home. Palm, RIM, Microsoft, Apple and Google have all said at one point that web apps were the answer for mobile apps....

ethbro said 13 days ago:

I mean, modern Google was half-built on the back of the Gmail web app...

scarface74 said 13 days ago:

Gmail was introduced after Google was already popular. The Google home page’s claim to fame was always its simplicity and fast load time.

ethbro said 13 days ago:

To an average user, Google in 2003 was a search page. In 2004+, it was essential internet infrastructure.

That's a pretty big difference.

scarface74 said 13 days ago:

Gmail is popular but in the grand scheme of things it’s not that popular for email. I’m sure that most people get most of their utility from email from their corporate email. Their personal email is mostly used for distant relationship type communications. Most personal interactions these days happen via messaging and social media. AKA “Email is for old people”.

Also, a lot of computer use is via mobile these days and I doubt too many people are using the web interface on mobile for gmail.

ethbro said 13 days ago:

It's pretty popular for email, at 25%+ market share [1]. That's a LOT of information to mine.

And point about conversations moving to post-email protocols, but email is certainly still up there with HTTP as a bedrock standard that everyone eventually touches.

Without pushing JavaScript and a full featured web client, it's fair to say Google wouldn't have grown as quickly and be nearly as dominant today.

As for their move to full mobile app, I think it's a bit of a different calculation when you happen to own the OS that powers ~75% of all mobile phones [2]. ;)

Suffice to say, I don't think Google has the same troubles as other developers. (Exception to security policy, for my first party app? Sure!)

[1] https://www.statista.com/chart/17570/most-popular-email-clie...

[2] https://www.statista.com/topics/3778/mobile-operating-system...

scarface74 said 12 days ago:

The question is not about how many people use Gmail - and that still doesn’t take into account corporate users. It’s about how many people use the web interface as opposed to using a mobile app.

ethbro said 11 days ago:

Yes. And we're both clear that there wasn't always a mobile app version of Gmail, right?

scarface74 said 11 days ago:

To say that Gmail had much to do with Google's growth doesn't really hold up today: there was only a relatively small window when email was the most popular form of personal communication (as opposed to corporate communication and spam), and that window closed over ten years ago, before mobile started taking over.

gilbetron said 13 days ago:

True, just like we were here before with devices like the Palm Pilot and Apple Newton, which is why the iPhone and IPad never took over the world ;)

hackcasual said 13 days ago:

I'd argue somewhat the opposite. Because WebAssembly is abstract but low level, it makes it really easy for a platform to optimize specifically for that platform, so instead of creating a need for specific platforms, it'll allow more diverse systems to run the same "native" blobs.

jariel said 13 days ago:

That potential has been there for many, many years; I don't see 'the thing' that provides the critical mass necessary to make it work in reality.

Web Assembly is one of the more misunderstood technologies in terms of its real, practical application.

At its core, it crunches numbers, in limited memory space. So this can provide some 'performance enhancements' possibly for running some kinds of algorithms. It means you can also write those in C/C++, or port them. Autodesk does this for some online viewers. This is actually a surprisingly narrow area of application and it still comes with a lot of complexity.

WA is a black box with no access to anything and how useful really is that?

Most of an app is drawing 'stuff' on the screen, storage, networking, user event management, fonts, image, videos - that's literally what apps are. The notion of adding 'black box for calculating stuff more quickly' is a major afterthought.

At the end of the day, JS keeps improving quite a lot and does pretty well, it might make more sense to have a variation of this that can be even more optimized than building something ground up.

WASI - the standard WebAssembly system interface - is a neat project, but I feel it may come along with some serious security headaches. Once you 'break out of the black box' ... well ... it's no longer a 'black box'.

WA will be a perennially interesting technology and maybe the best example of something that looks obviously useful but in reality isn't really. WA actually serves as a really great Product Manager's instructional example to articulate 'what things actually create value and why'.

It will be interesting to see how far we get with WASI.

cdcarter said 13 days ago:

I think you're underestimating WASI. With projects like CloudABI, where an existing app is compiled against a libc with strong sandboxing, really cool things happen.

jariel said 13 days ago:

Thanks, but the same thing was said about WASM and asm.js.

For 5 years we've been hearing about how great they are, yet nobody is really using them.

So now it's 'the next thing' that will make it great? Except that next thing isn't there yet: it's not agreed upon or implemented, and there's a lot we don't know about it.

Like I say, this is textbook example of tech-hype for things probably not as valuable as they appear.

If (huge if) WASI were 'great, functional, widespread, smoothly integrated' - I do agree there's more potential. But that this will really happen is questionable, and that even if it does happen, it will be valuable, is questionable.

entha_saava said 13 days ago:

I don't like to see wasm replacing native for stuff like development tooling, and desktop apps.

JITs may approach native performance in theory - but the battery consumption and memory consumption are not very good. ("Better than JS" is a low bar).

As hardware becomes stronger, I would like to do more with it, and when it comes to portable devices, I want more battery life. Nothing justifies compiling same code again and again, or downloading pages again and again like "web apps" shit.

I understand where developer productivity argument comes from. But we can have both efficiency and developer productivity - it is a problem with webshit-grade stacks that are used today that you can't have both.

I personally think the Flutter model is the future. You need not strive for "build once, run anywhere". You can write once and build anywhere with a cross-platform HLL, and that's better.

As for sandboxing, maybe it is that your OS sucks (I say this as a Linux user); Android / iOS have sandboxing with native code. You shouldn't need to waste energy and RAM for security. IMO enforcing W^X along with permission-based sandboxing is better than the webassembly bullshit that is pushed.

And WebAssembly itself seems to be a rudimentary project with overambitious goals. The JS bridge being so slow, and GC support still being in a "to be designed" state, makes it unusable for many purposes. Outside the HN echo chamber, not many web people want to write in Rust or even C++.

mbzi said 11 days ago:

When dealing with health or military systems, installing or updating a native application can result in months of delays (e.g. quarterly OS image update cycles). However, running within Chrome, Firefox, or other typically preinstalled software, implementation comes down to days.

Without WebAssembly I wouldn't have been able to ship 2 products pro-bono within intensive care units and operating theatres directly helping with COVID.

I understand your dislike of WebAssembly (albeit web-stack trends / flavour-of-the-month-style development). I am not the largest fan of modern web development. Nevertheless, the love for WebAssembly is not about developer productivity. After shipping 20+ WebAssembly products (alongside native counterparts) I am yet to meet an engineer who enjoyed the WebAssembly/Emscripten/Blazor pipeline. What WebAssembly has achieved for me comes down to one question: do people use your app? Within certain markets it allowed me to grow, do good, and say yes. That is the only real reason someone should go down this route.

anewvillager said 13 days ago:

> I don't like to see wasm replacing native for stuff like development tooling, and desktop apps.

Wasm is like the JVM or CLR in that regard. It's not the future - it's the past.

entha_saava said 13 days ago:

Yes. Even if there were a number of dominant architectures, install-time compilation would be better than run-time compilation for frequently used software. The RAM and energy overhead just isn't worth it.

jfkebwjsbx said 13 days ago:

Wasm was not designed to be a hardware accelerated ISA. It was designed as an IL/bytecode target like JVM and .NET.

Even if it were, there is an extremely high bar to meet for actual new ISAs/cores. There is no chance for Wasm to compete with RISC-V, Arm or x86.

RedShift1 said 13 days ago:

Aaah so we have come full circle from Java applets.

chx said 13 days ago:

> It's an ISA that looks set to be adopted in a pretty wide range of applications, web browsers, sandboxed and cross platform applications, embedded (into other programs) scripting, cryptocurrencies,

Imagine if the crowd hadn't fallen for the HODL hypers and had called these things cryptolotteries or something like that -- they are a betting game after all. How ridiculous would it look to include them in every discussion like this?

Sargos said 13 days ago:

What are you adding to the discussion? This is a technical forum, the least you could do is comment on the use of Web Assembly in Ethereum or maybe anything of substance. There's a bunch of technically interesting topics to bring up but somehow I doubt you know anything about them.

chx said 13 days ago:

I speak up against cryptocurrency because it's a cancer. It's a hype adding to climate change without any real world use case whatsoever.

Sargos said 13 days ago:

Have you looked deeper than just hodl memes and Bitcoin? Ethereum is a highly technical project that doesn't really care about money and lots of people here on Hacker News find interesting topics regarding it. Web Assembly will be the base programming platform for example, which is one of the reasons he included it.

If you read about the Baseline protocol (EY, Microsoft, SAP etc building neutral interconnections between consortiums), ENS/IPFS, or digital identity systems you might find something that interests you and is more relevant than the mindless hodl ancaps. It's actually a pretty exciting field to be in as a computer scientist with almost no end of boundary pushing experiments and cryptographic primitives to play with and build on top of.

aryonoco said 13 days ago:

Thank you for your input, but this is not TechCrunch. We understand the problems with PoW, and we also know that a lot of interesting research is being done on top of Ethereum. For your reference, Ethereum is moving away from PoW.

nostrademons said 13 days ago:

Most new cryptocurrencies are moving away from PoW because a.) it's a massive waste of electricity and b.) it's not actually secure anyway, because we've seen a consolidation of mining power with major ASIC customers who have cheap power costs (notably in China). Ethereum's moving to PoS in 2020 or 2021, and EOS, Stellar, Tezos, Cardano, etc. are already PoS or derivatives.

telotortium said 7 days ago:

Have the security issues with PoS been worked out yet?

autosage said 13 days ago:

Materialize https://materialize.io/ - Incremental update/materialization of database views with joins and aggregates is super interesting. It enables listening to data changes, not just on a row level, but on a view level. It's an approach that may completely solve the problem of cache invalidation of relational data. Imagine a memcache server, except it now also guarantees consistency. In addition, being able to listen to changes could make live-data applications trivial to build, even with filters, joins, whatever.

Similarly, someone is developing a patch for postgres that implements incrementally updating/materializing views[1]. I haven't tried it so I can't speak of its performance or the state of the project, but according to the postgres wiki page on the subject [2] it seems to support some joins and aggregates, but probably not something that would be recommended for production use.

[1] https://www.postgresql-archive.org/Implementing-Incremental-... [2] https://wiki.postgresql.org/wiki/Incremental_View_Maintenanc...
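
For a flavour of what this looks like from an application, here is a rough sketch against Materialize over the Postgres wire protocol. The table and view names are made up, and this is illustrative rather than production code:

    // Illustrative only: Materialize speaks the Postgres wire protocol, so the
    // incrementally maintained view lives behind ordinary SQL.
    const { Client } = require('pg');

    async function main() {
      const mz = new Client({ host: 'localhost', port: 6875, user: 'materialize' });
      await mz.connect();

      // A join + aggregate that Materialize keeps up to date as sources change,
      // instead of recomputing it on every read or invalidating a cache by hand.
      await mz.query(`
        CREATE MATERIALIZED VIEW customer_totals AS
        SELECT c.id, c.name, sum(o.amount) AS total
        FROM customers c JOIN orders o ON o.customer_id = c.id
        GROUP BY c.id, c.name
      `);

      // Reads are cheap because the result is already materialized and fresh.
      const { rows } = await mz.query('SELECT * FROM customer_totals');
      console.log(rows);
    }

    main().catch(console.error);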

jacobobryant said 13 days ago:

+1, very excited about this.

They're marketing it in the OLAP space right now, but at some point I'd like to try integrating it with a web framework I've been working on.[1][2] It'd be a more powerful version of firebase's real-time queries. Firebase's queries don't let you do joins; you basically can just filter over a single table at a time. So you have to listen to multiple queries and then join the results by hand on the frontend. Doesn't work if you're aggregating over a set of entities that's too large to send to the client (or that the client isn't authorized to see).

[1] https://findka.com/blog/migrating-to-biff/ [2] https://github.com/jacobobryant/biff

arjunnarayan said 13 days ago:

Thanks for the vote of confidence! One thing: We're not marketing it in the OLAP space. Our existing users very much are building new applications.

Initially we went for the metaphor of "what if you could keep complex SQL queries (e.g. 6-way joins and complex aggregations, the kinds of queries that today are essentially impossible outside a data warehouse) incrementally updated in your application within milliseconds? What would you build?"

We're moving away from that metaphor because it seems it's more confusing than helpful. Tips always appreciated!

jacobobryant said 13 days ago:

Ah, thanks for the correction. In any case I'm looking forward to trying it out eventually--got a number of other things ahead in the queue though.

My suggestion would be to consider comparing it to Firebase queries. Firebase devs are already familiar with how incrementally updated queries can simplify application development a lot. But, despite Firebase's best marketing attempts, the queries are very restrictive compared to SQL or Datalog.

brightball said 13 days ago:

I’ve always wanted to take the time to try to build this. It’s been possible in PG for a while to use a foreign data wrapper to do something like directly update an external cache via trigger or pubsub it to something that can do it for you.

Making it easy here would be absolutely fascinating.

nojito said 13 days ago:

Very similiar to

https://www.datomic.com/

estebarb said 13 days ago:

Materialize is based on differential dataflow, which is based on timely dataflow. The abstraction works like magic: distributed computation, ordering, consistency, storage, recalculation, invalidations... all those hard-to-solve problems are handled naturally by the computing paradigm. Maybe the product is similar, but not the principles behind it.

nojito said 13 days ago:

Principles only matter to hackers, but the end result for end users is identical.

It’s just very unfortunate that materialize has a much much bigger marketing team than the datomic people.

dustingetz said 13 days ago:

Materialize is streaming, Datomic is poll-based.

blain_the_train said 13 days ago:

How are they close?

slow_donkey said 13 days ago:

This looks great - I've been looking into Debezium for a similar idea, but they don't natively support views, which makes sense from a technical POV but is rather limiting. There are a few blog posts on attaching metadata/creating an aggregate table, but it involves the application creating that data, which seems backwards.

It would be huge if Materialize supports this out of the box. I believe it's a very useful middle ground between CRUD overwriting data and event sourcing. I still want my source of truth to be an RDBMS, but downstream services could use a data stream instead.

quodlibetor said 13 days ago:

This is exactly what we do! This is a walkthrough of connecting a db (these docs are for mysql, but postgres works and is almost identical) via debezium and defining views in materialize: https://materialize.io/docs/demos/business-intelligence/

iameoghan said 13 days ago:

That's super interesting. Will need to read a lot more about it though.

asattarmd said 13 days ago:

Doesn't Hasura or Postgraphile do this better? They give a GraphQL API over Postgres with support for subscriptions, along with authentication, authorization etc.

slow_donkey said 13 days ago:

You could shoehorn Hasura into this use case, but those tools are primarily intended for frontend clients to subscribe to a schema you expose.

Change data capture allows you to stream database changes to a message bus or stream, which has much better support for backend service requirements. Example: if a downstream service goes down, how would it retrieve the missed events from Hasura? Using Kafka or a buffered message bus, you'd be able to replay events to the service.

Never mind having to support websockets in all your services :/

ninjachen said 11 days ago:

Cool! It's interesting!

alex7o said 13 days ago:

It looks similar to couchdb?

prrls said 13 days ago:

Oxide Computer Company

https://oxide.computer/

“True rack-scale design, bringing cloud hyperscale innovations around density, efficiency, cost, reliability, manageability, and security to everyone running on-premises compute infrastructure.”

Corey Quinn interviewed the founders on his podcast "Screaming in the Cloud", where they explain the need for innovation in that space.

https://www.lastweekinaws.com/podcast/screaming-in-the-cloud...

Basically, on-premises hardware is years behind what companies like Facebook and Google have in-house; it may be time to close that gap.

They also have a podcast, "On The Metal", which is such a joy to listen to. Their last episode with Jonathan Blow was really a treat.

https://oxide.computer/podcast/

It's mostly anecdotes about programming for the hardware-software interface, if that's your thing ;).

prrls said 13 days ago:

And for people wondering why anyone would care about on-premises hosting when you have the cloud: a few weeks ago there was a thread about why you would choose the former over the latter. It shows that a lot of people are actually still on-premises, and for good reasons, which makes a good case for a company like Oxide to exist.

https://news.ycombinator.com/item?id=23089999

input_sh said 13 days ago:

Also see this meta comment which summed up other top-level comments by their arguments: https://news.ycombinator.com/item?id=23098654

8 of the 10 are cost-related.

carterklein13 said 13 days ago:

Wow, I had never heard of Oxide before this. I work at a huge company that is nearly finished with its cloud transformation, which frankly was largely a way to differentiate itself from its competition more than anything, and a huge cost sink.

This probably would've accomplished the same goal, with a lot less overhead.

kimburgess said 13 days ago:

I’d second that podcast recommendation. The episode with Jon Masters is an incredible conversation.

iamwil said 13 days ago:

Rust lang - Memory safety through zero cost abstraction as a way to eliminate a large class of errors in systems languages is interesting. Especially if it allows more people to write systems programs.

WASM - Mostly as a compile target for Rust, but I think this changes the way software might be deployed. No longer as a website, but as a binary distributed across CDNs.

ZK-SNARKS - Zero knowledge proofs are still nascent, but being able to prove you know something while not revealing what it is has specific applicability for outsourcing computation. It's a dream to replace cloud computing as we know it today.

Lightning Network - A way to do micropayments, if it works, will be pretty interesting.

BERT - Newer models for NLP are always interesting because the internet is full of text.

RoamResearch - The technology for this has been around for a while, but it got put together in an interesting way.

Oculus Quest - Been selling out during COVID. I sense a behavioral change.

Datomic - Datalog seems to be having a resurgence. I wonder if it can fight against the tide of editing in-place.

azureus said 13 days ago:

Datomic .. not just because of Datalog, but because it's hands down the best implementation of an AWS Lambda-based workflow I've seen (Datomic Ions). It's such a peach to work with.

Gollapalli said 13 days ago:

Wrt Datomic, there's also another Clojure DB using Datalog called Crux that's pretty interesting. I built my most recent project on that.

pot8n said 13 days ago:

Rust is awesome and very eye-opening, and it's a great alternative for almost any Golang use case. I just hope they prioritize improving compilation times if possible.

chx said 13 days ago:

> Lightning Network - A way to do micropayments, if it works,

You can stop the tape right there. You know it doesn't and it can't.

sosodev said 13 days ago:

Genuinely curious, what’s wrong with the lightning network?

dane-pgp said 13 days ago:

I don't know why the parent comment talked in such absolute terms, but these recent problems may be relevant:

https://news.bitcoin.com/hidden-lightning-network-bug-allowe...

https://news.bitcoin.com/mishap-sees-user-lose-30000-btc-on-...

CraigRood said 13 days ago:

Bitcoin.com isn't a neutral source on LN-related material. The parent company (St Bitts LLC) directly invests in Bitcoin Cash startups that compete directly with Bitcoin itself.

The bug has already been patched and had a limited userbase. The report of the user who supposedly lost all his Bitcoin ended up not being true; the vast majority was recovered. It's also worth noting that the user deliberately went against various UI warnings that funds may be lost.

https://github.com/lightningnetwork/lnd/issues/2468

companyhen said 12 days ago:

Linking to a Bitcoin.com article about anything BTC is like linking to a Fox News opinion article on Obama.

dylkil said 13 days ago:

For starters the whitepaper concludes that a 133mb base block size is needed for it to work at scale. Bitcoin currently has a 1mb block size limit, which it will never increase.

chx said 13 days ago:

It's not the lightning network -- it's micropayments.

Three days ago: https://news.ycombinator.com/item?id=23232978

guildmaster said 12 days ago:

> RoamResearch - The technology for this has been around for a while, but it got put together in a interesting way.

Just checked out the website, how is it any different from Dynalist or Workflowy?

iamwil said 8 days ago:

Never tried dynalist. Used Workflowy.

Workflowy is strictly an outliner. It's like Gopher--hierarchical, unlike the Web, which is a graph.

Feature-wise, Roam is more like a graph. You can really easily link to other concepts and rename pages (and everything renames). It also has a page generated daily, for things you want to write down.

Feeling-wise, you get to write things down, collect them, and organize them later. I think it's more conducive to how people think and research. You might have a piece of data, but you're not sure where to put it yet. Most other note-taking systems force you to categorize first.

dmak said 12 days ago:

I'm surprised people are still looking forward to the Lightning Network. Layer 2 has missed the boat because of all the politics and contention between the Bitcoin factions. Decentralized finance is already happening on Ethereum. We have stablecoins like Dai that underpin loans.

fsflover said 12 days ago:

> It's a dream to replace cloud computing as we know it today.

Perhaps you may be interested in Golem project devoted to distributed computing: https://golem.network/

baby said 13 days ago:

btw since two weeks ago the official Oculus Quest store is not sold out anymore (although it might be sold out again, haven't checked since it got back in store)

koeng said 13 days ago:

Oxford nanopore sequencing. If a few problems can be figured out (mainly around machine learning and protein design), then it will beat every other biological detection, diagnosis, and sequencing method by a massive amount (not 10x, but more like 100x-1000x).

It's hard to explain how big nanopore sequencing is if a few (hard) kinks can be figured out. Basically, it has the potential to completely democratize DNA sequencing.

Here is an explanation of the technology - https://www.youtube.com/watch?v=CGWZvHIi3i0

dhash said 13 days ago:

Best part is the Oxford devices are _actually affordable_. Illumina has had such a stranglehold on the market - devices start at around 35k and go up into “this is a house now” territory. Meanwhile the Flongle [0] is $99 and the main Oxford device can be had for $1k.

[0] https://store.nanoporetech.com/us/flowcells/flongle-flow-cel...

bsder said 13 days ago:

> Illumina has had such a stranglehold on the market - devices start at around 35k and go up into “this is a house now” territory.

You cannot effectively sell this kind of device under $25K--support costs simply eat your profit margin.

This is a constant across industries. You either have a $250 thneed (and you ignore your customers) or a $25K thneed (and you have shitty customer support) or a $250K thneed (and you have decent customer support).

zmmmmm said 13 days ago:

Depends what you mean by affordable - low barrier to entry, yes. But bases / $ is still orders of magnitude below where needed to displace Illumina for sequencing of large genomes (eg: human).

thewarrior said 13 days ago:

Can this be used to make faster coronavirus tests? If so, maybe this is the time to Manhattan-project this technology.

koeng said 13 days ago:

Generally, yes absolutely. I’ve been doing a project called “NanoSavSeq” (Nanopore Saliva Sequencing) in my free time. It’s published on dat right now since the raw files for Nanopore are really big (got too big for hashbase). There is one company doing it as well, but my version is completely open source and I’ve optimized it for affordable automation.

To give you a sense, you can buy one for $1k and do as much detection as a $64k device, and it's small enough to fit in a backpack. One device should be able to do 500-1000 tests per 24hrs at a cost of about $10 per test, not including labor.

Gatsky said 13 days ago:

Is this with multiplexing? Or are you extending the flowcell life?

koeng said 13 days ago:

Multiplexing. I use barcoded primers to amplify the sample, then pool and sequence

iameoghan said 13 days ago:

Would love to know more. This is fascinating.

koeng said 13 days ago:

The dat website is at dat://aaca379867bff648f454337f36a65c8239f2437538f2a4e0b4b5eb389ea0caff

You can visit with the beaker browser, or share it through dat so it won't ever go down.

You can also visit it at http://www.nanosavseq.com/ (DNS is not up yet, http://167.172.195.83/book/index.html is direct)

It's embarrassingly barren right now, mainly since I've encountered some big problems with getting my DNA quantifier out of storage to start doing a lot more experiments. I'm getting that on Tuesday, so will be updating site then.

unixhero said 12 days ago:

The book / documentation is very clean and presented in a fantastic way. May I ask what engine you are using for presenting this book?

koeng said 10 days ago:

mdbook! By the folks making the Rust docs. I love their formatting.

azureus said 11 days ago:

Would you like to work together on this? This is very interesting stuff.

koeng said 10 days ago:

Would love to. Feel free to email me at koeng101<at>gmail.

a_bonobo said 13 days ago:

The Oxford Nanopore people announced that they are in the 'advanced stages' of developing their own Covid-19 test called LamPORE

https://twitter.com/nanopore/status/1263711292868694021

Press release: https://nanoporetech.com/about-us/news/oxford-nanopore-techn...

'Oxford Nanopore is planning to deploy LamPORE for COVID-19 in a regulated setting initially on GridION and soon after on the portable MinION Mk1C.'

The GridION is still expensive and not affordable for a business or private person, but a MinION definitely is.

koeng said 13 days ago:

Thanks for those links! I knew it was only a matter of time

There are lots of folks working on LAMP in the DIYbio community. The kinda cool thing is that you can just have a colorimetric read-out, so you don't even need Nanopore sequencing. I'm guessing that the reason Nanopore is nice there is to eliminate false positives. I'm more a fan of this approach -

https://www.genomeweb.com/business-news/clear-labs-raises-18...

Because you can recover full genomes as a by-product of diagnostic tests (which is useful for tracing infection, for example https://nextstrain.org/)

bionhoward said 13 days ago:

Hell yes

abdullahkhalids said 13 days ago:

Libresilicon [1]. Making chip manufacturing libre is extremely important to our freedom from corporate and state tyranny.

> We develop a free (as in freedom, not as in free of charge) and open source semiconductor manufacturing process standard, including a full mixed signal PDK, and provide a quick, easy and inexpensive way for manufacturing. No NDAs will be required anywhere to get started, making it possible to build the designs in your basement if you wish so. We are aiming to revolutionize the market by breaking through the monopoly of proprietary closed source manufacturers!

[1] https://libresilicon.com/

Vinceo said 13 days ago:

This is really really exciting. Thanks

abdullahkhalids said 13 days ago:

It is. They are down to 1 um, so they need to make a bit more than an order of magnitude improvement to become performance competitive - 100 nm is early-2000s technology, and the Raspberry Pi is made on a 40 nm process.

rmason said 14 days ago:

1. Cloudflare Workers, I don't have the bandwidth to experiment with it right now but it interests me greatly.

https://workers.cloudflare.com/

2. Rust - definitely will be the next language I learn. Sadly the coronavirus cancelled a series of meetings in Michigan promising to give a gentle introduction to Rust.

https://www.rust-lang.org/learn

cxam said 13 days ago:

Cloudflare Workers and their KV service (https://www.cloudflare.com/products/workers-kv/) are great. I built a side project (https://tagx.io/) completely on their services, from hosting the static site to the auth flow and application data storage. KV is pretty cheap as well, starting around $5/month with reasonable resources.

rishav_sharan said 13 days ago:

I am also very interested in using Workers + KV. Can KV be used as a proper application database? Has anyone ever done that?

cxam said 13 days ago:

If by application database you mean to the level of an RDBMS, then no. It's a key-value data store. You get your CRUD operations, expiry, key listing and the ability to do pagination, with keys always returned in lexicographically sorted order. Any relational data you want to have would use key prefixes, e.g.:

  article:1 -> {"user": 1, "content": "..."}
  user:1 -> {"name": "username"}
  user:1:articles -> [1, ...]
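
To make that concrete, here is a tiny Worker sketch using that key layout (the ARTICLES_KV binding name and routes are made up for illustration):

    // Sketch only: a Worker reading/writing the prefixed keys shown above.
    addEventListener('fetch', event => {
      event.respondWith(handle(event.request));
    });

    async function handle(request) {
      const id = new URL(request.url).pathname.split('/').pop();

      if (request.method === 'GET') {
        const article = await ARTICLES_KV.get(`article:${id}`, 'json');
        if (!article) return new Response('not found', { status: 404 });
        return new Response(JSON.stringify(article), {
          headers: { 'content-type': 'application/json' },
        });
      }

      if (request.method === 'PUT') {
        await ARTICLES_KV.put(`article:${id}`, await request.text());
        return new Response('ok');
      }

      return new Response('method not allowed', { status: 405 });
    }
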
rishav_sharan said 13 days ago:

Yep, that's the plan. I will have keys like

postid:someid/text

postid:someid/author

etc.

The relational aspect doesn't daunt me. As long as I can have a list/collection as a value, I can define a working schema.

I am more worried about whether it's costlier than normal DBs, and whether there are any other gotchas to keep in mind, as Workers KV has scant documentation.

cxam said 13 days ago:

If you're comparing this to a normal DB, the biggest worry should be that it's not ACID compliant. Definitely something to consider if your data is important. The limitations for KV are listed here: https://developers.cloudflare.com/workers/about/limits#kv

You should also consider how you intend to back up the data, as there currently isn't a process to do that outside of writing something yourself to periodically download the keys. This will add to your usage cost depending on what your strategy is, for example backing up old keys that get updated vs only new keys by keeping track of the cursor.
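
As a rough illustration of that DIY approach, here is a sketch of a backup pass that pages through the namespace with the list cursor (the MY_KV binding name is made up):

    // Sketch only: walk every key via cursor pagination and download each value.
    async function backupNamespace() {
      const backup = {};
      let cursor;

      while (true) {
        const page = await MY_KV.list({ limit: 1000, cursor });
        for (const { name } of page.keys) {
          backup[name] = await MY_KV.get(name); // each read counts toward usage cost
        }
        if (page.list_complete) break;
        cursor = page.cursor;
      }

      return backup;
    }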

iameoghan said 13 days ago:

I've seen tagx a couple of times before. Awesome to know who the author is.

iameoghan said 14 days ago:

I've heard many good things about Cloudflare Workers.

Excuse my ignorance & n00bness, but are they essentially a Cloudflare version of AWS Lambdas, Google Cloud Functions and Netlify Functions, or are they something different/better?

closeparen said 13 days ago:

IIRC Cloudflare Workers run at each Cloudflare PoP, which have higher geographical density than AWS regions, so latency experienced by end-users may be lower.

VWWHFSfQ said 13 days ago:

AWS has the same thing with Lambda@Edge

ijpsud said 11 days ago:

According to this[0] blog post Lambda@Edge has significantly longer latency (due in part to a smaller number of locations). Cloudflare also uses V8 isolates instead of whole VMs, so much lower overhead. Disadvantage is that you can only run JavaScript and WASM.

[0]: https://blog.cloudflare.com/cloud-computing-without-containe...

iameoghan said 13 days ago:

Nice. Will check them out. IIRC they are really affordable too (like all serverless stuff tbh)

thegagne said 13 days ago:

More lightweight. It's just V8, so there's basically no warm-up time.

They have vastly more PoPs than Amazon, so global performance for these is on a different level. But they are also more limited in compute and serve a slightly different purpose.

stickfigure said 13 days ago:

I've done a couple neat (IMO) things with CF workers.

- I use imgix to manipulate images in my app, but some of my users don't want anyone to be able to discover (and steal) the source images. Imgix can't do this natively; all image manipulation instructions are in the URL. So I put a CF worker in front of imgix; my app encrypts the url, the worker decrypts it and proxies.

- A year ago, intercom.io didn't support permissions on their KB articles system. I like intercom's articles but (at the time) wanted to restrict them to actual customers. So I put a CF worker in front that gates based on a cookie set by my app.

These are both trivial, stateless 5-line scripts. I like that I can use CF workers to fundamentally change the behavior of hosted services I rely on. It's almost like being able to edit their code.

Of course, this only works for hosted services that work with custom domains.
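
For anyone curious, the first of those workers might look roughly like the following. This is not the author's gist linked below, just an illustration of the decrypt-and-proxy shape; the key handling and imgix host are made up:

    // Illustrative only: decrypt an AES-GCM encrypted imgix path carried in the URL,
    // then proxy upstream. SECRET_KEY_B64 would be bound as a Worker secret.
    addEventListener('fetch', event => {
      event.respondWith(handle(event.request));
    });

    const fromB64 = (s) =>
      Uint8Array.from(atob(s.replace(/-/g, '+').replace(/_/g, '/')), c => c.charCodeAt(0));

    async function handle(request) {
      const token = new URL(request.url).pathname.slice(1); // /<base64url(iv + ciphertext)>
      const raw = fromB64(token);
      const iv = raw.slice(0, 12);          // 96-bit nonce prepended by the app
      const ciphertext = raw.slice(12);

      const key = await crypto.subtle.importKey(
        'raw', fromB64(SECRET_KEY_B64), 'AES-GCM', false, ['decrypt']);
      const path = new TextDecoder().decode(
        await crypto.subtle.decrypt({ name: 'AES-GCM', iv }, key, ciphertext));

      // path now holds the real source image plus its manipulation instructions
      return fetch('https://example.imgix.net' + path);
    }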

ignoramous said 13 days ago:

> I like intercom's articles but (at the time) wanted to restrict them to actual customers. So I put a CF worker in front that gates based on a cookie set by my app.

Might be against their terms? I remember someone asked if they could treat Workers as an HTTP reverse proxy to essentially bypass restrictions, and the answer was "no".

stickfigure said 13 days ago:

Seems unlikely. But if they really want to lose paying customers, that would be one way of doing it.

theturtletalks said 13 days ago:

It’s pretty incredible. You can put it over any site and build your own A/B testing or targeting based on the user or the link used to get to your site.
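
A minimal sketch of that A/B idea (the cookie name and origin hosts are made up): pin each visitor to a variant with a cookie, then proxy to the matching origin.

    // Illustrative only: sticky A/B assignment at the edge.
    addEventListener('fetch', event => {
      event.respondWith(abTest(event.request));
    });

    async function abTest(request) {
      const cookies = request.headers.get('Cookie') || '';
      let variant = (cookies.match(/ab_variant=(a|b)/) || [])[1];
      const isNew = !variant;
      if (isNew) variant = Math.random() < 0.5 ? 'a' : 'b';

      const origin = variant === 'a' ? 'https://a.example.com' : 'https://b.example.com';
      const url = new URL(request.url);
      const upstream = await fetch(new Request(origin + url.pathname + url.search, request));

      // Re-wrap the response so headers are mutable, then set the sticky cookie once.
      const response = new Response(upstream.body, upstream);
      if (isNew) response.headers.append('Set-Cookie', `ab_variant=${variant}; Path=/`);
      return response;
    }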

iameoghan said 13 days ago:

>These are both trivial, stateless 5-line scripts

Would it be possible to share these scripts? I would love to see them, they sound really helpful/useful

stickfigure said 12 days ago:

Sure. Here's the help system one (no longer used since intercom now supports permissions, and I opened up the help system anyway):

    addEventListener('fetch', event => {
      event.respondWith(handleRequest(event.request))
    })

    async function handleRequest(request) {
      const cookie = request.headers.get('Cookie');
      if (cookie && cookie.includes('foo=bar')) {
        return await fetch(request);
      } else {
        return new Response('You must log in before accessing this content');
      }
    }
The encrypted URL script is actually a bit longer than "5 lines" (it has been a while) so here's a gist:

https://gist.github.com/stickfigure/af592b1ce7f888c5b8a4efbe...

sdan said 13 days ago:

Utilized Workers to create one of the fastest website analytics tools after Google Analytics: https://rapidanalytics.io (still in development).

XCSme said 13 days ago:

Fast, as in what sense? The tracking code loads fast? The tracking requests are sent fast to the server? The dashboards load fast?

maxencecornet said 13 days ago:

At my company, we run 100% of our front-end React/Vue web apps on Cloudflare Workers. We love it: deploys are really easy, and performance/resilience is built in.

mrfusion said 13 days ago:

Why Rust over Go?

ncmncm said 12 days ago:

Different use cases.

Go was designed to be easy to use for un-demanding problems. People mostly switch to Go from Ruby, Python, or Java; or from C in places where C was an unfortunate choice to begin with.

Rust is gunning for C++ territory, in places where the greater expressiveness of C++ is not needed or, in cases, not welcome. They would like to displace C (and the world would be a better place if that happened) but people still using C at this late date will typically be the last to adopt Rust.

mrfusion said 12 days ago:

So you're saying it should have better performance than Go?

ncmncm said 12 days ago:

I am mainly saying it is suitable for solving harder problems than Go. Most problems are not hard; Go is good enough for them, and easier to learn and use well.

All these languages are Turing-complete. The difference is in how much work it is to design and write the program, and in whether it can satisfy performance needs.

C++ wins here by being more expressive, making it better able to implement libraries that can be used in all cases. Rust is less capable, but stronger than other mainstream languages.

ncmncm said 12 days ago:

Sometimes. Better control of performance.

mrfusion said 12 days ago:

Easier to search for if nothing else.

aabajian said 13 days ago:

Geometric algebra: https://www.youtube.com/watch?v=tX4H_ctggYo

It makes a lot of hard physics problems (Maxwell's equations, relativity theory, quantum mechanics) much more understandable and (I'm told) unifies them in a common framework. I think it will help your average developer become comfortable with these areas of physics.
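
To give a flavour of the unification (standard results quoted from memory, with units glossed over): the geometric product of two vectors splits into inner and outer parts, and in the spacetime algebra Maxwell's four equations collapse into a single one:

    ab = a \cdot b + a \wedge b   % geometric product = inner part + outer part
    \nabla F = J                  % Maxwell's equations in spacetime algebra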

adamnemecek said 13 days ago:

The speaker in the video runs a community for people interested in geometric algebra. https://bivector.net/

Check out the demo https://observablehq.com/@enkimute/animated-orbits

Join the Discord: https://discord.gg/vGY6pPk

bmcahren said 13 days ago:

I can't wait to read into this. Switching formulas to tau was incredibly useful for me when I was doing a lot of 3D math for game dev.

elevenoh said 13 days ago:

Is all math/logic most fundamentally geometry?

avmich said 13 days ago:

Don't think so. Geometry requires space, which has certain features which constrain its properties (sorry for a tautology). If you avoid such constraints, you can still have math, but it doesn't make sense to call it geometry.

It looks a bit surprising for the definition of math to include a concept of space. Geometry looks underappreciated, yes, but to replace the whole of math...

ianai said 13 days ago:

To me, the answer here is “kinda, but no”. Math at its most basic level is about studying logical connections. Sometimes being able to symbolize something in notation allows inspection of logical objects otherwise unobservable - like higher-dimensional objects. But there's the whole area of mathematical logic. I think I can say Gödel's incompleteness theorems were only about axiomatic systems, with no necessary connection to geometry. Mathematics loves to study logical reductions of things, and geometry can certainly be left out through reductions.

There's something of a geometry/algebra separation in mathematics, too. The last few centuries (?) have tended toward algebraic research to the exclusion of geometry. There's even reason to believe the two types of reasoning are separated in human brains, insofar as people tend to be good at one and less good at the other.

carapace said 13 days ago:

Ah, but you can't encode math except in some necessarily geometric form.

mkl said 13 days ago:

Are you referring to written notation? Calling that geometry is a bit of a stretch. There's also nothing geometric about maths encoded in computer code, or many types of mathematical thoughts, so I think you are just incorrect.

carapace said 13 days ago:

> Are you referring to written notation? Calling that geometry is a bit of a stretch.

Can you write without shape?

> There's also nothing geometric about maths encoded in computer code

Look at a computer chip under a microscope: nothing but geometry.

> or many types of mathematical thoughts

In re: math itself, perhaps there is such a thing as a mathematics of the formless (I doubt it but cannot rule it out) but to communicate it you are again reduced to some symbolic form.

> so I think you are just incorrect.

I've been thinking about this for a long time, and I'm still not 101% convinced, but I think it's true: you can't have information without form.

Check out "The Markable Mark" and "My Stroke of Insight". The act of distinction is the foundation of the whole of symbolic thought, and it is intrinsically a geometric act.

http://www.markability.net

> ... what is to be found in these pages is a reworking of material from the book Laws of Form.

> Think of these pages, if you like, as a study in origination; where I am thinking of 'origin' not in the historical sense but as something more like the timeless grounding of one idea on or in another.

Distinction is a physiological thing the brain does. It can e.g. be "turned off" by physical damage to the brain:

https://www.ted.com/talks/jill_bolte_taylor_my_stroke_of_ins...

https://en.wikipedia.org/wiki/My_Stroke_of_Insight

> Dr. Jill Bolte Taylor ... tells of her experience in 1996 of having a stroke in her left hemisphere and how the human brain creates our perception of reality and includes tips about how Dr. Taylor rebuilt her own brain from the inside out.

So whether you come at it from the mystical realm of pure thought or the gooey realm of living brains all math is geometric. (As far as I can tell with my gooey brain.)

Cheers!

avmich said 13 days ago:

> and it is intrinsically a geometric act.

Why? Can't you have distinction without geometry? It's not only position which can be distinct, you can have other properties.

Two digits in different positions on paper can be both different - 0 and 1 - and the same - 5 and 5. You can encode them not by shape, but, say, by kind of particle?

And in general, our physical world has space - but how would you prove a world without space as we understand it can't have math?

carapace said 13 days ago:

> Why? Can't you have distinction without geometry?

Maybe but I don't see how.

> It's not only position which can be distinct, you can have other properties.

Properties like what? Color, sound, temperature, etc., all of these are geometric, no? Can you think of a concrete physical property that doesn't reduce to some kind of geometry?

> You can encode them not by shape, but, say, by kind of particle?

Sure, but then that particle must have some distinction from every other particle, either intrinsic or extrinsic (in relation to other particles), no?

Any sort of real-world distinction-making device has to have form, so that eliminates real non-geometric distinctions.

It may be possible to imagine a formless symbol but I've tried and I can't do it.

The experience of Dr. Taylor indicates to me that the brain constructs the subjective experience of symbolic distinction. (Watching her talk from an epistemological POV is really fascinating!)

So that only leaves some kind of mystic realm of formless, uh, "things". My experience has convinced me that "the formless" is both real and non-symbolic, however by the very nature of the thing I can't symbolize this knowledge.

    In the Beginning was the Void
    And the Void was without Form
If you can come up with a counter-example I would stand amazed. Cheers!
avmich said 13 days ago:

> Can you think of a concrete physical property that doesn't reduce to some kind of geometry?

How would you reduce charge to geometry? Or spin?

Can we differentiate by space the electrons in an atom of helium?

But we sort of digress. The question was if a concept of space is required to a concept of math, and specifically, if we can have distinction without space. Surely we can at least think of distinction without space, even if we'd fail to present that in our physical world?

modeless said 13 days ago:

1. https://www.starlink.com/ Finally, truly global and low latency satellite internet.

2. Generative models for video games - https://aidungeon.io/ is barely scratching the surface. Story, art, animation, music, gameplay, it will all be generated by models in the future.

3. New direct drive robotics actuators such as https://www.google.com/search?q=peano-hasel+actuators I think actuators are holding robotics back more than software now. Breakthroughs are needed. No general purpose robot will ever be practical with electric motors and gearboxes.

4. Self-driving cars are still happening, despite delays. I think discounting Tesla's approach is a mistake, but Waymo is still in the lead.

5. NLP is finally starting to work. The potential for automation is huge. Code generation is very exciting as well.

6. I was excited for Rust but I now believe it's too complex. I'm looking for a much simpler language that can still achieve the holy grail of memory safety without GC pauses or refcounting. But I'm not holding my breath. If ML models start writing a large part of our code then the human ergonomics of programming language design will matter less.

losthobbies said 13 days ago:

Jai? Jonathan Blow’s new programming language might be an option for you.

https://inductive.no/jai/

AsyncAwait said 13 days ago:

Jai doesn't do very much in terms of memory safety, Zig [1] might be a better alternative + it actually exists.

1 - https://github.com/ziglang/zig

littlestymaar said 13 days ago:

What does Zig offer regarding memory safety? Isn't pointer manipulation in Zig as unsafe as in C?

AsyncAwait said 13 days ago:

For example [1] & [2], with more being worked on. Now, Rust is king when it comes to memory safety, especially at compile time, and is miles ahead of anyone else (not counting research languages). But Jai isn't really being designed with much emphasis on memory safety, so I'm not sure it's fair to propose it as a Rust alternative if memory safety is what you're looking for.

1 - https://andrewkelley.me/post/unsafe-zig-safer-than-unsafe-ru...

2 - https://ziglang.org/#Performance-and-Safety-Choose-Two

guildmaster said 12 days ago:

Would love to read up on the advancements in NLP. Can you share some links?

slyall said 13 days ago:

Sidewalk delivery robots.

The problem is a lot easier than driverless cars (everything is slower and a remote human can take over in hard places), and there's huge potential to shake up the short-to-medium distance delivery business. It's the sort of tech that could quickly explode into hundreds of cities worldwide, like e-scooters did a couple of years ago.

Starship Technologies is the best known company in the area and furthest advanced. https://www.starship.xyz/

frellus said 13 days ago:

Yesterday, while driving in downtown Mountain View, CA, one of these damn things stopped short of coming into the crosswalk. So I and the driver going the opposite direction stopped, like we would for a person.

The damn thing made us wait for what felt like an eternity. And it still didn't move. So I started to roll forward, and I swear to you, I was almost hoping I would hear the crunch of electronics if the thing had decided to roll forward, given I had given up on it.

A year ago I was in a BevMo trying to get beer and an inventory robot was in the aisle. The thing was bulky (morbidly obese?) and I couldn't get by it as it was rolling slowly up the aisle taking pictures on both sides, so I went down the parallel aisle hoping to get in front of it to get to my beer. Nope, the thing got there first and blocked me.

Robots are our future. And it will be annoying during our lifetime. There was a reason Han Solo snapped at C3PO to shut up. I don't know what Han has had to deal with in his lifetime, but I can take some guess now on where his "shoot first" mentality came from.

Ozzie_osman said 13 days ago:

We've been ordering through them once every couple of weeks during the pandemic. It's really cool. Even though the robot itself is really slow (a good 40 minutes for a 1-mile journey), they're usually pretty available and responsive, so we'd get things faster than on human-based platforms (where someone has to be available, then go to the pickup point, then deliver).

therealcamino said 13 days ago:

It seems like if they get popular they're going to run into problems with sidewalk availability. We're already using sidewalks for walking. You can add a few robots and not have any blowback, but once the novelty wears off, having to navigate around slowpoke robots on your walk is going to get old.

Eridrus said 13 days ago:

Cities are not immutable objects. It's going to be extremely contentious, because all local politics is, but it's not infeasible to alter our cities.

jibolso said 13 days ago:

Seen a bunch of these doing deliveries within Milton Keynes.

bubba1236 said 13 days ago:

Amazon Scout is way ahead of them in technology.

skmurphy said 13 days ago:

Optical Coherence Tomography (OCT) occupies a intermediate position in accuracy / skin depth for soft tissue between ultrasound and MRI

Optically pumped magnetometers (OPM) approaches SQUID level accuracy without need for supercooled device, can be worn or used as a contact sensor like ultrasound.

LoRA long range (10km +) low power sub-gigahertz radio frequency protocol useful for battery powered IoT devices transmitting small amounts of data.

Heat cabinet for infectious diseases, an old technology used to fight polio and other diseases that went out of favor with introduction of antibiotics. May find utility against novel viral infections.

UV light treatment of blood. Another old technology that may find use against novel infectious agents; it stimulates the immune system to fight viral infections.

Balgair said 13 days ago:

Oh man, I used to do research on OCT for deep brain stimulation! It's pretty cool tech, that is for sure. It's got a huge market for bio applications and certain industrial uses.

That said, optics is a super finicky field. You can come in and get a Nobel for 5 hours of work, or you can spend 50 years in a dark room trying to get things together. Alignment is crazy difficult, though it seems it shouldn't be.

Anyone that wants to dive into optics: Just do it for 2 years, no more.

jl2718 said 13 days ago:

Alignment should be done system-wide by orders of magnitude. If you are on a breadboard, get everything to within a cm of final location, then everything within a mm, etc. Don’t ever spend more than 1 minute at a time on any component. This stuff was not in the textbooks.

Balgair said 13 days ago:

It's especially hard with OCT as it's in the IR spectrum. You just have to go on your meters alone. It takes forever.

duncanawoods said 13 days ago:

Tell me more about getting a Nobel in 5 hours. I’ll buy your ebook :)

ncmncm said 12 days ago:

Meaning, if you don't get there in two years, you won't?

specialist said 13 days ago:

Neat list. Thanks.

I have chronic graft-versus-host disease. Side effect of a bone marrow transplant. Mostly affects my skin, which changes color, gets thinner, and in advanced stages hardens (aka marbleization).

GVHD is wicked hard to diagnose and monitor; the usual tools are skin biopsies and normal digital photos.

I've asked my misc doctors (at FHCRC, SCCA, Univ of Wash) over the years about using UV to better diagnose skin conditions.

Now I'm wondering if OCT could also be helpful, perhaps for assessing the scarring.

skmurphy said 13 days ago:

OCT is a recognized diagnostic modality in dermatology and worth discussing with your doctors. Here are some references:

https://www.mdedge.com/dermatology/article/146053/melanoma/o...

https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5946785/

http://www.octnews.org/category/5/dermatology/

and here is a clinic that talks about using it to assess skin: https://dermnetnz.org/topics/optical-coherence-tomography/

said 8 days ago:
[deleted]
lvs said 13 days ago:

> UV light treatment of blood.

What? No... No don't do this. This is a discarded idea from the era before molecular biology, and it was discarded for very good reason.

skmurphy said 13 days ago:

Opportunity can come from ideas that are correct but not generally accepted as correct.

https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4783265/

If it were to work it would be a useful new modality. I am not promoting it, but it's on my "watch list" due to efforts by AYTU at Cedars Sinai.

bearsnowstorm said 13 days ago:

In my experience the commonest current mainstream use of this is for Sezary Syndrome / mycosis fungoides (cutaneous T-cell lymphoma). See for example:

https://my.clevelandclinic.org/health/articles/8493-mycosis-...

lvs said 13 days ago:

Opportunity to give people leukemia. These ideas are at a prehistoric level of biology. We're way beyond this silliness now.

skmurphy said 13 days ago:

I am talking about work being done in clinical trials at reputable medical clinics. They may be mistaken but I don't think it's "silliness." Here is a recent clinical trial evaluating UVBI https://www.tandfonline.com/doi/full/10.1080/2331205X.2019.1...

Of course there are many other mainstream treatments that came from somewhat oddball ideas: Sister Kenny's treatments for polio, the Nobel prize winning discovery by Barry Marshall and Robin Warren that ulcers were caused by bacteria (H. pylori), the use of leeches for treatment of venous congestion after surgery, and the use of maggots for wound debridement.

lvs said 13 days ago:

It should never have been signed off on by an IRB. It is irresponsible and horrific that this has been trialed on people in this century.

skmurphy said 13 days ago:

I suspect this may be a case of "the dose makes the poison."

You may be generalizing from a specific experience or specific experiment and rejecting a modality that may have significant efficacy.

It's hard to tell what you are basing your assertions on because you offer no specifics. My "watch list" interest is based on the number of positive experimental results and ongoing investigations of the technique.

ianai said 13 days ago:

Any specific objections to the research links provided?

bjourne said 13 days ago:

Velomobiles! A velomobile is a recumbent bike with fairing which enables them to be more convenient and much faster than a regular bike. A fit rider can easily overtake the peloton in Tour de France (https://www.youtube.com/watch?v=UBb7YIRcBe0). The velomobile in the clip is a standard model and there are racing models that are faster still!

Just like with regular bikes, you can add electric assist to them to extend their range and make the job of the rider easier. In this clip (https://www.youtube.com/watch?v=OCo4cRQMBlo) the rider gets an average speed of 37.5 km/h (top speed 84 km/h) over a distance of 73 km with over half the battery remaining. And that is without wearing the racing hoodie which significantly reduces drag.

The main problem with velomobiles is that they are expensive. The frame is made from carbon fiber and needs to be handcrafted, so the price ranges from about €5000 - €10000, which is too expensive for most. If some Chinese giant or billionaire investor set out to mass produce velomobiles I'm sure they could totally revolutionize transportation.

nostrademons said 13 days ago:

GPGPU. GPU performance is still increasing along Moore's Law, while single-core performance has plateaued. The implication is that at some point the differential will become so great that we'd be stupid to continue running anything other than simple housekeeping tasks on the CPU. There's a lot of capital investment that'd need to happen for that transition - we basically need to throw out much of what we've learned about algorithm design over the past 50 years and learn new parallel algorithms - but that's part of what makes it exciting.

nabusman said 13 days ago:

Sounds interesting, what language is best positioned for GPGPUs?

kekeblom said 13 days ago:

C++ through CUDA is by far the most popular option. There is some support in other languages, but the support and ecosystem are far from what exists for CUDA and C++.

lmeyerov said 13 days ago:

Python via RAPIDS.ai. Python is there first because most of the data science community working at production scale is in it. It feels like the early days of Hadoop and Spark.

IMO Golang and JS are both better technical fits (Go for parallel concurrency, JS for concurrency/V8/typed arrays/WASM), and we got close via the Apache Arrow libs, but it will be a year or two more for them as a core supporter is needed and we had to stop the JS side after we wrote Arrow. The Python side is exploding, so now it's just a matter of time.
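
To make that concrete, here is a minimal sketch of what the RAPIDS/cuDF workflow looks like from Python (this assumes a CUDA-capable GPU and the cudf package are installed; the column names and data are made up):

    # GPU dataframe via RAPIDS cuDF: a pandas-like API, executed on the GPU.
    # Assumes a CUDA-capable GPU and the cudf package; data is made up.
    import cudf

    df = cudf.DataFrame({
        "sensor": ["a", "b", "a", "b", "a"],
        "value": [1.0, 2.5, 3.2, 0.7, 4.1],
    })

    # Familiar dataframe operations, run on the GPU.
    means = df.groupby("sensor").mean()
    print(means)

    # Round-trip back to pandas on the CPU only when needed.
    cpu_df = means.to_pandas()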

gavinray said 14 days ago:

I wrote a guide on connecting Hasura + Forest admin for no-code SaaS apps + admin backends:

"14. Connect Forest Admin to Hasura & Postgres"

http://hasura-forest-admin.surge.sh/#/?id=_14-connect-forest...

For Heroku specifically you need to make sure that the client attempting to connect does it over SSL, so set SSL mode if possible (many clients will do this by-default).

To get the connection string for pasting into Forest Admin config, run this:

    heroku config | grep HEROKU_POSTGRESQL
That should give you a connection string you can copy + paste to access externally from Heroku:

    HEROKU_POSTGRESQL_YELLOW_URL: postgres://user3123:passkja83kd8@ec2-117-21-174-214.compute-1.amazonaws.com:6212/db982398
https://devcenter.heroku.com/articles/heroku-postgresql#exte...
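
If your client still won't connect, one quick way to sanity-check the SSL requirement from Python (a minimal sketch, assuming psycopg2 is installed; the connection string is just the placeholder from above, so substitute your own HEROKU_POSTGRESQL_*_URL value):

    # Sanity-check that the Heroku Postgres instance accepts external SSL
    # connections. Assumes psycopg2 is installed; the URL below is the
    # placeholder from above -- paste your own value from `heroku config`.
    import psycopg2

    dsn = "postgres://user3123:passkja83kd8@ec2-117-21-174-214.compute-1.amazonaws.com:6212/db982398"

    # sslmode="require" is the part Heroku expects from external clients.
    conn = psycopg2.connect(dsn, sslmode="require")
    with conn.cursor() as cur:
        cur.execute("SELECT version();")
        print(cur.fetchone())
    conn.close()
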
iameoghan said 14 days ago:

Awesome, will definitely check it out. I think it was your post in a different thread earlier this year where I came across it originally. I remembered the name as you helped me on the Hasura Discord (thank you for all your awesome input there) & it looks so promising.

seyz said 13 days ago:

Thank you very much for this article, that's awesome!

gavinray said 13 days ago:

Oh snap, the founder of Forest Admin!

Glad you liked the post, I've been using Forest on both real-world SaaS platforms and small side-startups since early 2017. Really cool to watch how much you've evolved since then.

Also, Louis S. is amazing! I've sent two emails to you guys over the years, Louis answered both of them within a day.

Throwback to 2017 UI ;)

https://i.imgur.com/KT9Wtlx.png

seyz said 13 days ago:

This comment is epic, thank you very much :-) I'm sure Louis will be super happy to read this as well.

See you soon!

chx said 13 days ago:

Zig. There's a Why Zig When There is Already CPP, D, and Rust? writeup at https://github.com/ziglang/zig/wiki/Why-Zig-When-There-is-Al...

jtolds said 13 days ago:

seriously, zig is so amazing. if all zig was was https://andrewkelley.me/post/zig-cc-powerful-drop-in-replace... it would be unbelievable, but it's so much more than that.

jorangreef said 13 days ago:

I came here to say the same thing: Zig. The design decisions are spot on.

For example, Modeling Data Concurrency w/ Asynchronous I/O in Zig by Andrew Kelley: https://t.co/VYNqNcrkH1?amp=1

benibela said 13 days ago:

>D has @property functions, which are methods that you call with what looks like field access, so in the above example, c.d might call a function. Rust has operator overloading, so the + operator might call a function.

But I love properties and operator overloading

pot8n said 13 days ago:

You just put learning Zig on my todo list. Thanks :D

Eugeleo said 13 days ago:

I’m a little surprised that there aren’t any mentions of Obsidian, while there are at least two mentions of Roam. To all Roam lovers, and to all intellectuals in general, I’d recommend you to check out Obsidian [1] from the makers of Dynalist.

It’s also a tool made mainly for Zettelkasten, but it is offline and local by default. It’s not an outliner like Roam, but rather a free-form text editor.

I feel that Obsidian’s values align more closely with the values of a general HN reader. For example, the files (Zettels?) are plain markdown files, so the portability is much higher than what is the case with Roam (which is online only, and your data is somewhere in a database in a proprietary format).

Another example would be the support for plugins, which are first-class citizens (although the API is yet undocumented) — many of the core features are implemented as plugins and can be turned off.

And there’s a Discord channel where you can discuss with the devs, who are very responsive — so much so that I’m surprised they can roll out new features so quickly (at least one feature update per week, from my limited experience with Obsidian).

(Not affiliated in any way, just a happy user. I copied most of this comment from another comment of mine)

[1]: https://obsidian.md/

sho said 13 days ago:

  I’m a little surprised that there aren’t any mentions of Obsidian, while there are at least two mentions of Roam. To all Roam lovers, and to all intellectuals in general, I’d recommend you to check out Obsidian [1] from the makers of Dynalist.

  It’s also a tool made mainly for Zettelkasten
Just so you know, I consider myself probably a fairly typical HN user. Got my own little daily tech concerns, but keep a toe in the water of the larger zeitgeist. I have no idea what anything you just said means. You could be talking about brands of car or my little ponies for all I know. Googling it - seems it's something to do with notes?

Just remember that not everyone is in your little concern-bubble, and one or two explanatory sentences would be very welcome.

marvinblum said 13 days ago:

I wrote an article about Luhmann's Zettelkasten if you are interested: https://emvi.com/blog/luhmanns-zettelkasten-a-productivity-t...

Eugeleo said 12 days ago:

I had to stop somewhere with the explanations. I was mainly addressing people already familiar with Roam, and also decided that Zettelkasten as a term is quite easily googleable. It’s true I could have slipped a few words there, along the lines of “...which is a note-taking technique” — I’ll make sure to do that next time.

anotheryou said 13 days ago:

the other thing is https://roamresearch.com/

It's a text based wiki or outliner (collapsible text) thought a step further, with auto backlinks etc.

Feels to me like a weaker org-mode, online, with better cross-links/embeds (something that is indeed uncool in org: things can't live in two places at once).

skosch said 13 days ago:

There is org-roam, and it's getting better by the day: https://github.com/org-roam/org-roam

anotheryou said 13 days ago:

Yea, have to try that some time.

I'll need to switch from one biiiig file to multiple files then, though. I think my biggest hindrance is setting up the refile targeting :)

iameoghan said 13 days ago:

Gonna check this out, seems very useful!

maccam94 said 13 days ago:

1. Starship - https://www.spacex.com/vehicles/starship/

Completely reusable rocket that can carry 100 tons into low Earth orbit, refuel, and then continue on to places like Mars. Launches are estimated to cost SpaceX about $2M, compared to the SLS $1B (estimated, similar lift capability) and space shuttle $1B (27 tons). The engines are real, test vehicles are flying (another test launch is likely to happen in the next week or two). Follow the SpaceX subreddit for progress updates

2. Commonwealth Fusion Systems - http://cfs.energy

Lots of reactors have struggled with scale and plasma instability. CFS has adopted a design using new REBCO high temperature superconductor magnets that are stronger and smaller, which can be used to build smaller reactors and better stabilize the plasma. They are building a prototype called SPARC, expected to produce net energy gain by 2025.

mindvirus said 13 days ago:

Subvocal recognition: https://en.wikipedia.org/wiki/Subvocal_recognition Imagine how much more people would use voice input if they could do it silently.

Also neural interfaces like CTRL-labs was building before being acquired. Imagine if you could navigate and type at full speed while standing on the subway.

I think that rich, high fidelity inputs like those are going to be key to ambient computing really taking off.

mNovak said 13 days ago:

Been wanting subvocalization since reading the Ender series

McTossOut said 13 days ago:

Google assistant circa 2012 felt a bit like this already... and then scope creep made things too dumb for me to even use.

EvanWard97 said 13 days ago:

- Far UVC lights (200 to ~222nm) such as Ushio's Care222 tech. This light destroys pathogens quickly while not seeming to damage human skin or eyes.

- FPGAs. I'm no computer engineer, but it seems like this tech is going to soon drastically increase our compute.

- Augur, among other prediction platforms. Beliefs will pay rent.

- Web Assembly, as noted elsewhere. One use case I haven't read yet here is distributed computing. BOINC via WASM could facilitate dozens more users to join the network.

- Decision-making software, particularly that which leverages random variable inputs and uses Monte Carlo methods, and helps elicit the most accurate predictions and preferences of the user.
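
As a toy illustration of that last idea (all numbers and option names here are invented): model each option's payoff as a random variable, simulate, and compare the resulting distributions rather than single point estimates.

    # Toy Monte Carlo decision support: simulate each option's payoff
    # distribution and compare. All numbers/options are invented.
    import numpy as np

    rng = np.random.default_rng(0)
    n = 100_000

    # Option A: fairly certain, modest payoff.
    option_a = rng.normal(loc=50_000, scale=5_000, size=n)

    # Option B: risky -- 30% chance of a big win, otherwise a loss.
    wins = rng.random(n) < 0.30
    option_b = np.where(wins,
                        rng.normal(200_000, 20_000, n),
                        rng.normal(-20_000, 5_000, n))

    for name, sims in [("A", option_a), ("B", option_b)]:
        print(name, "mean:", round(sims.mean()),
              "5th percentile:", round(np.percentile(sims, 5)))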

jeffreyrogers said 13 days ago:

I'm an FPGA engineer and I doubt they will go mainstream. They work great for prototyping, low-volume production, or products that need flexibility in features, but they are hard to use (unlikely to get better in my opinion) and it's hard to see where they would fit into a compute pipeline given that you need to transfer the data to the FPGA, perform your computation/processing, and then transfer the data back.

That said, they are very cool! And learning to create FPGA designs teaches you a lot about how processors and other low level stuff works.

cinquemb said 13 days ago:

>it's hard to see where they would fit into a compute pipeline given that you need to transfer the data to the FPGA, perform your computation/processing, and then transfer the data back.

I see them going mainstream when brain-computer interfaces go mainstream (probably a long way away). In my experience working in a couple of labs and on some related hardware, a lot of it depends on processing a large volume of data from the sensors, most of which is thrown away due to the sheer volume, and on being able to easily update the filtration matrices tailored to the sampled data.

ironman1478 said 13 days ago:

FPGAs are too expensive, power hungry, and large. We use them for many tasks at my workplace and we are spinning up an ASIC team because using FPGAs just doesn't meet our power and size requirements. Also, building ASICs can be cheaper in the long run if the future of what needs to be done is relatively stable.

cinquemb said 12 days ago:

> Also, building asics can be cheaper in the long run if the future of what needs to be done is relatively stable.

I don't doubt it, yet I find it hard to describe the human brain over time, especially across people, as 'relatively stable'; at least from the perspective of DSP and beamforming of impedance measurements from the scalp to gauge the relative power output at various regions of the brain.

ponker said 13 days ago:

FPGAs will go mainstream if software can automatically program them. Imagine some watchdog inspecting the CPU and figuring out what it needs hardware acceleration for, building that netlist or pulling it from a large library, and sending it to the FPGA and then routing work from the CPU over to the FPGA.

lvs said 13 days ago:

> Far UVC lights (200 to ~222nm)

OK, these are not safe wavelengths, and whatever you're reading is not right. This is absolutely ionizing radiation. The rate of formation of thymine dimers in this regime is similar to that around 260 nm. That is, it causes DNA damage. Please see Figure 8 below:

https://onlinelibrary.wiley.com/doi/pdf/10.1111/j.1751-1097....

The logic of the claim that you can destroy a pathogen with UV but not cause damage to human tissues is incongruous. If it kills the pathogen, it also causes radiation damage to human tissues as well. One cannot dissociate these because they are caused by the same photoionization mechanism.

toomuchtodo said 13 days ago:

https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5552051/

> We have previously shown that 207-nm ultraviolet (UV) light has similar antimicrobial properties as typical germicidal UV light (254 nm), but without inducing mammalian skin damage. The biophysical rationale is based on the limited penetration distance of 207-nm light in biological samples (e.g. stratum corneum) compared with that of 254-nm light. Here we extended our previous studies to 222-nm light and tested the hypothesis that there exists a narrow wavelength window in the far-UVC region, from around 200–222 nm, which is significantly harmful to bacteria, but without damaging cells in tissues.

> As predicted by biophysical considerations and in agreement with our previous findings, far-UVC light in the range of 200–222 nm kills bacteria efficiently regardless of their drug-resistant proficiency, but without the skin damaging effects associated with conventional germicidal UV exposure.

ijpsud said 11 days ago:

So if I'm reading correctly, the 207-nm ultraviolet light simply doesn't make it past the outer (dead) layer of skin.

lvs said 9 days ago:

That's not relevant, and the paper itself doesn't really measure anything pertinent either. Ionizing radiation does not cause molecular ionization that stays in one place. It generates free radicals that propagate in reaction chains. Reducing the penetration depth only increases the volumetric dose.

toomuchtodo said 10 days ago:

Correct, but I’d still like to see their data as to what the impact is to eye tissue.

formercoder said 13 days ago:

FPGAs have been around for quite a while. Is something changing?

Kliment said 13 days ago:

Non-stupid open toolchains are slowly happening. Vendor toolchains are the biggest thing holding back FPGAs. Everyone hates them, they're slow, huge, and annoying to use.

hikarudo said 13 days ago:

One thing that is changing quickly: deep learning, particularly inference on the edge. FPGAs are more versatile than ASICs.

BooneJS said 13 days ago:

Everyone making ML ASICs would disagree.

formercoder said 13 days ago:

This just provides a cost advantage though right? I mean that’s great, love me some margin, but it’s not really a new frontier. Unless I’m wrong?

wtfno009887466 said 13 days ago:

Dozens!

akavel said 13 days ago:

https://luna-lang.org - a dual-representation (visual & textual) programming language

RISC-V

Zig programming language

Nim programming language

(also some stuff mentioned by others, like WASM, Rust, Nix/NixOS)

canada_dry said 13 days ago:

> luna-lang

Whoa... had to do a double take there.

Great to see luna seems to be alive yet again - now "enso lang" per github [i]. A git commit just days ago... so here's hoping! It is such a great concept.

[i] https://github.com/luna/ide

Balgair said 13 days ago:

Optogenetics [0]. Light changes electrical behavior in cells. AKA, point laser, neurons fire, I know kung-fu

Memristors [1] Rebuilding the entire computer from EE basics. New 'color' added to EE spectrum, now computers process huge datasets on a watch battery

CRISPR-CaS9 [2] Tricks bacteria use to keep viruses out are pretty slick. Ctrl-C, Ctrl-V for gene editing. $100B technology, easily.

Strangely (encouragingly?) all these words are 'misspelled'

NOTE: I am massively oversimplifying things.

[0] https://en.wikipedia.org/wiki/Optogenetics

[1] https://en.wikipedia.org/wiki/Memristor

[2] https://en.wikipedia.org/wiki/CRISPR

freehunter said 13 days ago:

I was excited for memristors in 2008 when HP announced they were right around the corner. They even built a new computer and a new OS to take advantage of memristors [1]. And then it never happened and no one has ever built one and it’s pretty much vaporware. I would be hesitant to trust anyone who says they’re anywhere close. It’s just not a technology that actually exists.

[1] https://www.extremetech.com/extreme/207897-hp-kills-the-mach...

hanniabu said 13 days ago:

> They even built a new computer and a new OS to take advantage of memristors

I could be wrong but I think I remember reading somewhere they ran into patent infringement issues that they couldn't get around or something like that.

hobofan said 13 days ago:

CRISPR-CaS9 seems to be pretty much "done" already. It's already used by many labs and high-profile projects, has transformed upcoming gene therapy pipelines and the main problems are being ironed out. I don't think that there is any doubt anymore that CRISPR is a big milestone in biotech.

hnick said 13 days ago:

VR. It seems just about ready, but still a little too expensive.

While good games are obviously already there, I'm more curious about work. Would an infinite desktop with an appropriate interface beat the reliable old 2x24" screen setup I have? I think it could.

mindvirus said 13 days ago:

I've had so many moments in VR where I could glimpse the future, so I'm definitely bullish. The problems seem incrementally solvable: display resolution, portability and comfort seem like they are easy enough to solve with time, along with better/higher-fidelity inputs.

A big thing with it as well I think will be focus, I'd love to be able to entirely shut out the world while working on something for 90 minutes or so.

This is one where I think it'll get to be good enough outside of niche gaming and just take off - my prediction is it'll take about 6 more years (i.e. 3 more release cycles) before the hardware is past the post.

signaru said 13 days ago:

Not just for games. Imagine VR tourism, meetings... or even escapism. Especially these days. Last time I used one the graphics weren't immersive enough yet. I don't mind tethering it to a powerful computer, but image quality is a must.

duncanawoods said 13 days ago:

I enjoy VR games, but none of those sound attractive to me, at least with headsets and controllers.

Interactive virtual worlds are wondrous but not actually terribly practical for traditional tasks. Something they are perfect for, though, is training for roles involving a lot of awareness and physical interaction in rare and extreme environments, e.g. emergency workers, police, soldiers etc.

zmmmmm said 13 days ago:

I think it's at an interesting tipping point.

Products like the Quest are crossing the threshold to where it's affordable, completely self-contained and high quality enough to provide a great true VR experience. They need to about halve the cost while maintaining quality and if they can do that then there is no reason why this shouldn't explode into the market place.

radarthreat said 13 days ago:

Nobody wants to wear a VR headset all day

dvt said 13 days ago:

> While good games are obviously already there, I'm more curious about work.

Good games are most definitely not there. The consensus is that Alyx is really the only worthwhile VR title. Just about everything else is gimmicky and trite. VR still has a long way to go.

XCSme said 13 days ago:

I play Eleven Table Tennis (as the table tennis clubs were closed due to covid). That game is the best simulation game you can play today. The physics are very close to reality, so close that in-game and IRL skills are immediately transferable. The biggest issues I encounter are not with the game itself, but with the tracking limitations of the Rift S inside-out tracking.

hnick said 13 days ago:

I'm in no position to judge, having played none, but I've heard glowing reviews for more than just Alyx; I didn't commit any titles to memory since I'm not planning to be an early adopter.

However, some do admit current VR is heavily carried by the novelty of using your hands (much like the Wii's motion controls made many average games enjoyable while it was fresh).

duncanawoods said 13 days ago:

Skyrim and Fallout 4 are breathtaking in VR.

kvz said 13 days ago:

Nix—It takes buy-in, but very worth it for us. Builds are reliable, reproducible, can exist alongside one another. Plug an S3 powered CloudFront cache into it, and you’re never building the same thing more than once.

Deno—Sandboxed by default seems a powerful way to let our customers run custom code. Native TypeScript, builds single binaries. I still have to play around with it, but those all seem like compelling advantages over Node.js.

ianai said 13 days ago:

Crazy idea time. Is anyone piping randomly generated code into nix and selecting for AI in the output? (I’m pretty far out of my realm here so sorry in advance)

mkl said 13 days ago:

The search space of possible code is unfathomably enormous. I think you'd have better luck generating amazing art with random colours for each pixel (i.e. still none).

People have done more limited genetic programming for a long time now (essentially randomly mutating formulas, keeping ones that do better), but neural networks are doing the arbitrary function-fitting better at the moment.
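
For anyone who hasn't seen it, that "randomly mutate, keep what does better" loop in its most stripped-down form looks something like this (a toy sketch only: real genetic programming mutates the structure of the formula, not just its coefficients, and the target function here is made up):

    # Toy "mutate and keep the better one" loop: fit a quadratic's
    # coefficients to a hidden target function. Real GP evolves the
    # expression tree itself; everything here is invented for illustration.
    import random

    target = lambda x: 3.0 * x * x + 2.0 * x - 1.0   # hidden from the search
    xs = [x / 10.0 for x in range(-20, 21)]

    def error(coeffs):
        a, b, c = coeffs
        return sum((a * x * x + b * x + c - target(x)) ** 2 for x in xs)

    best = [random.uniform(-5, 5) for _ in range(3)]
    for _ in range(20_000):
        child = best[:]
        child[random.randrange(3)] += random.gauss(0, 0.1)  # small random mutation
        if error(child) < error(best):                      # keep it if it fits better
            best = child

    print("evolved coefficients:", [round(c, 2) for c in best])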

What does it have to do with Nix, though?

ianai said 13 days ago:

Bleary eyed me thought nix sounded well defined enough to make searching the input space more tractable.

dnautics said 13 days ago:

Zig Programming language.

Because it's basically C-+, it's extremely easy to use, and also extremely easy to parse, so (and maybe this is self-serving because I did it) I see a lot of automated code-generation tools to hook Zig into other PLs.

C's age is starting to show (and it's becoming a problem), and I think Zig has the opportunity to be sort of a place for old C heads like me to go to stay relevant, modern, safe, and productive.

quack01 said 13 days ago:

All the new products around WireGuard. I'm so tired of running VPNs. NAT traversal with protocols like STUN, TURN and ICE is going to allow point-to-point networks for all the clients.

https://tailscale.com/

https://portal.cloud/app/subspace

api said 13 days ago:

Zerotier.com did this years ago and it works great.

aidanhs said 13 days ago:

I'm kinda sad for you - I've been using and advocating zerotier for a while (it's amazing and indispensable)...but in my circles the word 'wireguard' has got people excited, which (anecdotally) is benefitting tailscale and generating more hype around them than zerotier ever got. Hopefully a rising tide will lift all ships and you find a way to capitalise on it :)

(I prefer device-based zerotier-style access rather than login-based tailscale-style so that does sway me to zerotier...but I have to admit tailscale looks more polished, e.g. the screenshot of seeing other devices on the network. I get it's not a fundamental feature! But I can't help but appreciate it)

api said 13 days ago:

We are doing fine and V2 is coming soon with a ton of improvements. I just have to occasionally point out our existence again.

The pulldown showing other devices on a network does look spiffy, but that won't scale. We have users with thousands of devices on a virtual LAN and the protocol will scale far larger. Not only will that not fit in a menu, but any system that relies on a master list will fall down; that list, and refreshing it, will get huge.

We are doing the tech first, spiff second.

chanux said 13 days ago:

Some feedback: I think I stumbled upon ZeroTier a while back and didn't really get what it is. IIRC it felt like something that is only useful for big companies, which is exactly what I felt today.

I think the website could do a better job showcasing how it's used.

Hope my feedback is helpful and wish all the best!

api said 13 days ago:

Our web site kind of sucks. We're going to be working with a design/marketing firm to re-do it soon.

It's kind of hard to explain ZeroTier sometimes. It's so simple (to the user) that people have a hard time getting it.

"You just make a network and connect stuff." Huh?

People have been conditioned to think networking is hard because 90% of networking software is crap.

derekja said 13 days ago:

UI issues for sure, but the product is great. I have computers in 3 different organizations and the ability to tie them into a coherent virtual site so they can all talk to each other is amazing. I no longer have to worry about having forgotten some file on my home network that I needed at the university, for instance. Looking forward to V2!

ramzis said 13 days ago:

Thanks for ZeroTier! Managed to convert a few friends from using Hamachi for LAN games, which was always a pain to setup previously. It simply just works for my needs.

chanux said 12 days ago:

Ooh! So it could replace Hamachi. I think this is one use case (without using the product name) that can be listed on a use-cases page. Hope other ZeroTier users would chime in with more use cases.

api said 12 days ago:

ZeroTier emulates a L2 Ethernet switch over any network, so anything you can do with Ethernet basically.

You make networks, add stuff to them, and any protocol you want just works: games, ssh, sftp, http, drive mounts (though they can be slow over the Internet), video chat, VoIP, even stuff like BGP or old protocols like IPX work.

aidanhs said 13 days ago:

Really happy to hear it's all going well, and I've been excited about V2 since I read the blog post about it - your product is awesome and solves a genuine need, and I really want you to succeed.

northern-lights said 13 days ago:

https://www.sens.org : Solving the problem of aging and diseases of aging. Watch a few interviews of Aubrey de Grey to get a better idea of the possibilities of their research. Though this would come under the "to watch" not for the immediate future but for the next decade or two.

mindfulplay said 13 days ago:

One thing that's not clear to me is the advantage of living longer. Why do some people feel the need to live longer?

When I hear of blood transfusions and such, it also feels like a lot of these technologies are being developed by snake-oil salespeople for other non-gullible but strictly egocentric humans.

mindvirus said 13 days ago:

One thing with aging research is it's also about healthier living - for example, letting people be healthier well into old age, even if the number of years is the same. In terms of why people want to live longer, I think it's just human nature at the end of the day - more time to do the things you enjoy and help the people you care about.

Practically speaking, there are a lot of advantages to longer lives:

- Generally, it means people will be healthier, which means reduced societal burden.

- A deeper family structure can mean better education and childcare. You have more time to be with your friends and family, more time to pursue hobbies, more time to explore the world.

- Scientists, engineers and researchers can spend more time building and leveraging their expertise. If people live 20% longer for example, I suspect there's more than a 20% increase in advancement because in so many fields breakthroughs happen near the end of your career.

Of course, there will be consequences that need to be addressed.

- Does this only lead to increased inequity, where the wealthy are able to accumulate wealth and knowledge even more easily due to access to anti-aging? Already there's a 14-year difference in life expectancy between the richest and poorest 1%; imagine if that was 50 years.

- How do we adjust our social safety nets when people are living to 100 instead of 80?

- How does this change over-population and over-consumption?

mindfulplay said 12 days ago:

I see your point. I think there is an angle there. However, it still feels like a rich person's game: I'm not sure if average or below-average conditions (childhood mortality, adult mortality) will improve in developing countries because of this.

That's really where the "average" advantage seems to be: if life becomes better for everyone then they live longer.

mrleinad said 13 days ago:

So many interesting links. This post should be a regular on HN.

kpierce said 13 days ago:

Especially now. I've been using HN for years, and before covid I could see all the posts from the day before in 20 mins. Now it will take 2-3 hours. Everyone is sharing.

askjdlkasdjsd said 13 days ago:

A friend of mine is working on coscout. It's in beta right now, but he showed me some pretty insane machine learning based insights for companies, investors and founders.

Things like

- When will this company raise the next round?

- What is the net worth of <literally anyone>?

- What is the probability that this investor will invest in you? (given your sector, founder age, pedigree, gender, market conditions, whether you have an MVP or not, etc.)

- A bunch of other complicated stuff I didn't really understand

Definitely worth keeping an eye on if you're into this kinda stuff: https://coscout.com/

mindvirus said 13 days ago:

Feedback loops in this sort of thing always scare me. For example - say people of one demographic are less likely to fund raise, so the model says they're less likely to succeed, so investors using the model don't invest in them and they are put at an even further disadvantage. And so something that is inherently data driven can end up moving further away from the meritocratic ideal it's likely trying for.

And the thing is, it's hard to get this bias out of models - almost everything ends up correlating to age, race, gender and so on - zip codes, income, schools, past employers, etc.

askjdlkasdjsd said 12 days ago:

Agreed. It's definitely up to the user to make smart decisions.

However, it's not so cut and dry either. In my last company (B2C mobile app based), we were pretty much getting beat by several competitors. And it showed across all metrics, ratings/reviews/downloads/web traffic/retention/engagement - what have you.

And later on we found out that the founders had been fudging the metrics they presented to investors, which is why they were actually never able to raise the round, though they came very close. By straight up lying.

If some form of business/product intelligence is used to identify such red flags, it can save a lot of bad decisions and heartache from ever happening for everyone involved.

In that regard, I welcome more empirical evidence based decision making (aka statistics/machine learning etc.) where it's appropriate.

ashishrvce said 13 days ago:

Is there any way I can get access to this? The product seems intriguing and I can be a paid customer.

askjdlkasdjsd said 13 days ago:

They said they're in the final stages of testing right now, they'll first roll out slowly to customers that they're already working with, then the waitlist and then do a wider release for everyone by the end of next month.

Best bet would be to sign up on the waitlist for now https://coscout.com/dashboard

busterarm said 13 days ago:

I guess I'm much more conservative than other folks, but I think we've scratched only 10% of the surface of the benefits that things like Kubernetes, Consul, Vault and Terraform should/will provide.

So they're on the list. I feel like at my job I'm pushing at the edges (as far as running large scale, stable production) and we've still got miles left.

Also Bazel.

I guess this is a boring answer.

elevenoh said 13 days ago:

>I think we've scratched only 10% of the surface of the benefits that things like Kubernetes, Consul, Vault and Terraform should/will provide.

What benefits are we not seeing?

So many apps over-engineer their scalability.

busterarm said 13 days ago:

What benefits aren't we seeing?

End-to-end automation still isn't done in most places and it's considered hard.

Having made a significant automation investment, I can say that it's easier after you've put some of the work in. It trends towards easiness, but up front it can seem insurmountably hard.

Caveat: Our infrastructure bill is stretching about ten million yearly and our team is small (avg. about 1M being managed per person), size your expectations appropriately.

thundergolfer said 13 days ago:

Was also going to post Bazel.

Here's the best 'elevator pitch' for Bazel that I know of. 3 minutes from Oscar Boykin about why Bazel (or at least a Bazel-like system) is the future.

https://youtu.be/t_Omlhh7IJc?t=40

bionhoward said 13 days ago:

Terraform is a game changer and easy to learn

timClicks said 13 days ago:

Happy to be told otherwise, but I think that Juju is the only tool in that space that understands inter-connected applications and can spin up services that span k8s/VMs/clouds and work together.

kortex said 13 days ago:

Is there anything like "terraform provider for bare metal"? Would be soo convenient to just go from full nuke and pave to functional dev machine with a single config repo.

redis_mlc said 13 days ago:

Terraform alone just does infra provisioning, but it can call a script for application setup.

jcims said 13 days ago:

pxe boot?

juvoni said 13 days ago:

Roam Research https://roamresearch.com/

A tool for networked thought that has been an effective "Second Brain" for me.

I'm writing way more than ever through daily notes, and the bi-directional linking of notes enables me to build smarter connections between notes and structure my thoughts in a way that helps me take more action and build stronger ideas over time.

Eugeleo said 13 days ago:

I’d recommend you to check out Obsidian [1] from the makers of Dynalist. It’s also a tool made mainly for Zettelkasten, but it is offline and local by default. It’s not an outliner like Roam, but rather a free-form text editor.

I feel that Obsidian’s values align more closely with the values of a general HN reader. For example, the files (Zettels?) are plain markdown files, so the portability is much higher than what is the case with Roam (which is online only, and your data is somewhere in a database in a proprietary format).

Another example would be the support for plugins, which are first-class citizens (although the API is yet undocumented) — many of the core features are implemented as plugins and can be turned off.

And there’s a Discord channel where you can discuss with the devs, who are very responsive — so much so that I’m surprised they can roll out new features so quickly (at least one feature update per week, from my limited experience with Obsidian).

(Not affiliated in any way, just a happy user)

[1]: https://obsidian.md/

gnramires said 13 days ago:

I've had good experiences with personal Wikis before, but have fallen back to plain notes. I think notetaking by itself is immensely powerful and underappreciated in general (wish I had started earlier), and all that's necessary is building a habit out of it. Maybe this can give it a little extra spice (hopefully not as cumbersome as a full blown personal website).

Eugeleo said 13 days ago:

I can recommend this video [1] from the author of How to Take Smart Notes. The whole Zettelkasten is a great idea, and he explains it succinctly in that talk. He also compares the status quo methods of note-taking with the Zettelkasten, which for me was very eye-opening.

[1]: https://vimeo.com/275530205

devericx said 13 days ago:

Notational Velocity [0] seems to be something very similar, if not the exact same, except it's a macOS app and not a web app.

[0] http://notational.net

thadk said 13 days ago:

thanks! Longtime user of nvAlt and I never noticed that.

eppsilon said 10 days ago:

FYI: the nvAlt developer is working on a new version: https://brettterpstra.com/2019/04/10/codename-nvultra/

typon said 13 days ago:

How's this different from hypertext? (I genuinely don't know)

Eugeleo said 13 days ago:

Not sure about the specific features of hypertext, but in general: bi-directional linking, block references (Roam is an outliner like Dynalist or Workflowy), block transclusion, graph view of your page network...

Of course, you could throw a bunch of scripts together to approximate these features — but you don’t have to, since Roam (and Obsidian and others) exists.

said 13 days ago:
[deleted]
iameoghan said 13 days ago:

Good shout - that's been on my watch list for a while now. Thanks for the reminder!

sidhanthp said 13 days ago:

The hype on Twitter can get a bit annoying - but Roam is seriously awesome.

MiroF said 13 days ago:

This is just tiddlywiki, no?

Eugeleo said 13 days ago:

Considering that Tiddlywiki has around 4 plugins that are supposed to make it more like Roam, I’d say that probably Roam isn’t just like TiddlyWiki.

Now, I’m not a TW user, but I think things like block references, outliner features, and bi-directional linking aren’t there by default.

bionhoward said 13 days ago:

Ubuntu, ParrotOS, Kali

Julia Lang is fun

For devops, Pulumi/CDK

I watch graph dbs but they all suck or are too expensive or have insane licenses (Neo4j, RedisGraph)

Differentiable programming, Zygote, Jax, PyTorch, TF/Keras

Optimal transport (sliced fused gromov-wasserstein)

AGI, levin, solomonoff, hutter, schmidhuber, friston

Ramda is amazing

George Church’s publications

I'm also super interested in molecular programming

DEAP is fun for tree GP a la Koza

Judea Pearl’s work will change the world once folks grok it

Secure multiparty computation

maccam94 said 13 days ago:

I looked into pulumi last week, and it seems cool but I think they need to rework their library design to avoid fracturing their ecosystem for each language (or just standardize on one language).

andrewnc said 13 days ago:

As an optimal transport lover working on differentiable programming, I approve this message. :)

treelovinhippie said 13 days ago:

Svelte is my go-to for personal projects. My speculative hunch is it will begin to rival React within the next 5 years for its simplicity and thus cost reductions.

There are a lot of advantages, but this 5min video comparing React hooks to Svelte should be enough to trigger interest: https://www.youtube.com/watch?v=YtD2mWDQnxM

Aeolun said 13 days ago:

I’m really curious how my 100kloc enterprise app would work in Svelte, but my hunch is it just wouldn’t be possible to build.

Svelte always seems really cool in these toy examples, but I want to see a significant app built with it instead.

treelovinhippie said 13 days ago:

Yeah, that's a common myth. Svelte isn't some fringe alternative JS framework. It performs exactly the same functions and has the same structure as any React/Vue app, but does so with far less code and runs far faster since it's a compiler.

You're not going to find many brand-name companies using it because the PM decision-makers at large enterprises are always going to be many years behind and going with the "safe" JS framework leader at the time.

Well-known companies currently using Svelte: Apple [1], New York Times [2], Spotify [3] and GoDaddy [4]

1: https://twitter.com/mansur_ashraf/status/1204542852581273600

2: Svelte creator Rich Harris works for them

3: https://www.reddit.com/r/sveltejs/comments/f18n33/companies_...

4: https://svelte.dev

XCSme said 13 days ago:

"as any React/Vue app" From my experience, and what I heard from others, a lot of devs prefer React over Vue even though Vue syntax is cleaner and allows for shorter code. React just feels more robust when the app grows large enough compared to Vue. Note that we're not talking only about the library itself and its syntax, but also the ecosystem and support around it.

That being said, I think Svelte -> Vue comparison might be even more imbalanced than Vue -> React.

Blammar said 13 days ago:

Solar energy, carbon dioxide and water directly to butanol. In other words, store solar energy directly as fuel. There are other versions that generate hydrogen, but that has a much lower energy density than liquid fuel.

Just modify your existing ICE to run on butanol and you're good to go. <a bit of hand waving there.>

See https://www.intelligentliving.co/biofuel-solar-energy-co2-wa... for where we were a year ago.

cxam said 13 days ago:

Caddy, specifically v2 (https://caddyserver.com/v2)

I've been using Caddy v2 all through beta/RC and glad it's finally stable with a proper release. I moved away from using nginx for serving static content for my side projects and prototypes. I'm also starting to replace reverse proxying from HAProxy as well. The lightweight config and the automated TLS with Let's Encrypt makes everything a breeze. Definitely has become part of my day-to-day.

jah242 said 13 days ago:

Robotics + Deep Learning - I think we just quietly passed a milestone where robots using deep learning can perform useful real world tasks (selecting items, packing boxes etc)

If true we could be at a watershed for robotics adoption and progress as large scale deployments generate the data to train on more complex tasks, leading to more deployments and so snowballing onwards

This seems like a much more likely process that will lead to a type of “general AI” than researchers pushing us all the way there

Covariant AI (and their partnerships) is what got me thinking: https://covariant.ai/

newsat13 said 13 days ago:

Self hosting - https://cloudron.io

umaar said 13 days ago:

A while back I installed ServerPilot which automatically sets up Nginx/Apache/PHP/MySQL for you. It also handles security updates. This made those $5 VPS' so much more appealing [1] as I could install lots of small Node.js apps on a single server, and avoid managed hosting providers who seem to prefer charging per app instance.

Anyway ServerPilot then scrapped their free plan so I've been looking for an alternative. cloudron looks cool, I don't see anything specific to Node.js/Express, but it does have a LAMP stack which includes Apache, so I might try that. Otherwise I'll probably use something like CapRover [2], a self-hosted platform as a service.

[1] https://twitter.com/umaar/status/1256155563748139009

[2] https://caprover.com/

jedieaston said 13 days ago:

Dokku is an excellent option for this sort of thing, and manages subdomains for you.

http://dokku.viewdocs.io/dokku/

paulgb said 13 days ago:

And SSL is a cinch! I have been very happy with Dokku, I'm surprised I don't see it mentioned around here more often.

lukevp said 13 days ago:

Would love to get your opinion as I'm building a competing product to ServerPilot in this space. Is the $5 too expensive for the service? or is it just too expensive because the billing increases as you have more servers under management, and they charge you per app as well?

Are there features ServerPilot is missing that would justify the price more for you? Some examples might be monitoring, analytics, automated security patching, containerization of workloads, etc.

Would the plan be more appealing if the cost of the plan, the portal, and the VM hosting itself were all rolled into one? (i.e. you would just pay one company, rather than having to sign up for DO as well as ServerPilot).

ethanpil said 13 days ago:

1) Independence of hosting provider is a must. Don't want to be forced to use your VPS service when I have all my infrastructure already on Linode, DO, Vultr, etc.

2) Should be free when used in non-commercial applications. Multiple servers included.

3) Keep the common and already available typical configurations free: LAMP, LEMP, Python, Let's Encrypt, email. Charge for things which no other panel, free or otherwise, typically supports: LiteSpeed, Go, Caddy, load balancing, SQL replication, GraphQL, etc. That's value.

said 13 days ago:
[deleted]
exolymph said 13 days ago:

"Self-hosting apps is time consuming and error-prone. Keeping your system up-to-date and secure is a full-time job. Cloudron lets you focus on using the apps and not worry about system administration."

neat, don't think I've seen something like this before!

threeseed said 13 days ago:

It kind of just looks like a simplified version of CPanel which has been on every VPS for the last 20+ years.

said 13 days ago:
[deleted]
exolymph said 11 days ago:

"simplified version of CPanel" is something neat that I haven't seen before

in addition, sometimes people don't know things that you know, and you would do well to keep that in mind: https://xkcd.com/1053/

lostmsu said 13 days ago:

They'd be so much more successful if the "Install" button did not have this:

  wget https://cloudron.io/cloudron-setup
  chmod +x ./cloudron-setup
  ./cloudron-setup --provider [digitalocean,ec2,generic,ovh,...]
gramakri said 13 days ago:

There's a cloudron 1-click image on Digital Ocean

lostmsu said 13 days ago:

Which, as a regular user, I don't understand when I see it.

Hell, I am a dev, and I still did not know that it would let me create one quickly.

gramakri said 13 days ago:

Agreed. Do you have any suggestions to improve the initial onboarding?

lostmsu said 12 days ago:

Ideally:

- user picks a cloud (or has an "Advanced" option on the next step instead)

- you show them OpenID/OAuth form for their cloud provider

- guide them through the creation of an account if necessary

- you get the token, that permits your server to create cloud resources on behalf of the user

- you go ahead and create their services for them

- potentially store the token to be able to update the apps automatically

I thought about that when I was considering making a similar service (also similar to sandstorm.io). Glad to see somebody doing something in that area (I guess without the permissions model yet).

Problem is: most clouds don't let you easily create an account, so "guide them through the creation of an account" might be impossible without leaving the browser.

unixhero said 12 days ago:

Ahoy gramakri.

I have been a Cloudron user for a bit of time. Recently I have launched a company and we're now a paying and very happy customer of Cloudron's business subscription.

It seems that the "next app suggestion" process have stalled. To me as an outsider of your internal process, I cannot see what applications are being preferred over others. There are tons of very good suggestions which are not receiving traction it seems, from the app suggestion-forum.

A few examples which Cloudron needs, and which would help attract more users:

- A Wireguard VPN frontend application

- Jupyter Notebooks Environment

- Odoo ERP Community edition

- Erpnext

3fe9a03ccd14ca5 said 13 days ago:

Is that any less secure than “sudo dpkg -i foo.deb”?

lostmsu said 13 days ago:

It is certainly less secure, than just calling the API of those cloud providers directly from the site backend.

JoelMcCracken said 13 days ago:

The subscription price is crazy now. And they don't even do hosting.

hanniabu said 13 days ago:

> And they don't even do hosting.

I'm pretty sure that's their whole point of existence.

javiramos said 13 days ago:

I am particularly interested in food products that will replace animal-based foods. There will be a major shift in the upcoming decades as consumers move to more sustainable alternatives. This will change industries, towns and regions.

bitwize said 13 days ago:

Combining statistics-based AI with GOFAI to create systems that can both recognize objects in a sea of data and reason about what they saw.

The MiSTer FPGA-based hardware platform.

RISC-V is gonna change everything. Yeah, RISC-V is good.

bionhoward said 13 days ago:

How do you combine statistics-based AI with GOFAI?

sqrt17 said 13 days ago:

GOFAI basically consists of inference and reasoning techniques, some of which cease to work well when you scale them up too much (computational complexity) or when there is uncertainty involved. There have been some efforts to scale reasoning towards greater scale (description logics) as well as problems with uncertainty (ILP, Markov Logic), but they've been de-emphasized or forgotten in recent times because you get a lot of mileage out of end-to-end deep learning - where essentially hidden state within the network deals with the uncertainty on its own, and where the additional compute overhead + rule engineering effort doesn't seem warranted.

wpietri said 13 days ago:

Darklang: https://darklang.com/

I've tried out an early version of their product, and I really like where they're headed.

freehunter said 13 days ago:

I’d love to try it if they didn’t tie the language to their hosting service. I understand the necessity of the coupling but until someone can start a competing hosting company with the same language, it’s not something I’m interested in.

Gollapalli said 13 days ago:

Huh, Dark looks pretty similar to what I'm doing, albeit significantly more work since they went and developed a whole language and editor. If you're not averse to Clojure, give this a look: https://github.com/acgollapalli/dataworks

wpietri said 13 days ago:

Totally reasonable. I feel the same way, but there are a lot of people who just want something up and running and are happy to accept vendor lock-in risks. I'm sure they'll get there eventually.

topher200 said 13 days ago:

I agree here. It's really exciting to have a platform where the friction of releases is totally removed. I'm excited to see where they end up with this product.

nickreese said 13 days ago:

Cloudflare Workers. It was on my watch list at the beginning of the year, and I'm just about to put a 20k-page "static" (with tons of interactivity) site into production on them.

Using it as an API gateway and KV store for truly static assets is amazing.

kanakiyajay said 13 days ago:

Wrote a blog post on the technologies I believe are going to change industries

1. No-Code Tools 2. GraphQL 3. Oculus Quest 4. FPGA 5. Netflix for Gamers 6. Windows for Linux 7. Notion 8. Animation Engines

https://jay.kanakiya.in/blog/what-i-am-excited-about-in-2020...

jmiskovic said 13 days ago:

FPGA is still too cumbersome to make it big. It's too expensive for general appliances, talent is hard to find, and the development process is still stuck where software was 20 years ago. FPGA vendors are still trying to roll out their own everything-included, non-standard solutions. Those solutions don't scale well. I've seen engineers struggling to trace where some signal ends up; it's complete insanity.

I find GPUs conceptually similar to FPGAs for most soft applications (video processing and similar number crunching). They also provide a huge number of re-purposable blocks for programmable parallel computing. GPUs have won out because they became mainstream through gaming and they more readily opened up to general software practices and methodologies. It's no surprise the machine learning community is avoiding FPGAs for the most part.

kanakiyajay said 13 days ago:

Agreed, FPGA is too expensive currently with a non-standard toolset present across the industry. But if someone is able to create an industry coalition (think Wi-Fi Alliance or Bluetooth SIG) it can definitely make a large impact for everyone involved with several companies reaping the benefits

colemanfoley said 13 days ago:

Great post, thanks.

fergie said 13 days ago:

Solid state batteries.

The tech is tantalizingly close, although not perfected yet. If and when they become available, these batteries will have a far higher energy density and degrade at a far lower rate than existing batteries.

ianai said 13 days ago:

I agree. Is there any chance this is what Tesla’s battery day could include?

hackerbabz said 13 days ago:

https://immersedvr.com/

Virtual monitors in an Oculus Quest that actually work. What’s coming up that will be amazing is hand controls (including a virtual keyboard) and conferencing and collaboration tools.

dkarp said 13 days ago:

I'm going to try this out. I assumed the resolution of the Quest wasn't quite there to make coding in a virtual desktop comfortable. How has your experience been?

hackerbabz said 13 days ago:

I use it every day. With wifi-direct there is zero lag.

I work with 3 1440x900 virtual screens. It’s more than enough for coding and the convenience of multiple screens for free offsets the low resolution.

jmiskovic said 13 days ago:

Did you try increasing the internal texture resolution? It makes text crisper. The framerate will drop, but it's tolerable for this use case. I find it very useful. There are various resolutions supported; this is the highest one, where the increase in quality should be most noticeable:

$ adb shell setprop debug.oculus.textureWidth 2048 && adb shell setprop debug.oculus.textureHeight 2048

You have to start the application after this is executed. To go back to the original, you can reboot the device or run this:

$ adb shell setprop debug.oculus.textureWidth 1280 && adb shell setprop debug.oculus.textureHeight 720

hackerbabz said 9 days ago:

I don't even know where to type that in. Do you somehow access a terminal on the oculus or does that send a command from my computer to the oculus?

said 13 days ago:
[deleted]
tomywomy said 13 days ago:

Couldn’t figure out how to add virtual monitors

hackerbabz said 12 days ago:

Are you on Mac? Virtual monitors are only available on Mac for now but are rolling out for windows and Linux soon. You can also use headless monitor plugs in the meantime.

api said 13 days ago:

The general trend toward returning computing to the edge, which is just starting and has been accelerated due to COVID forcing BeyondCorp type practices on us.

Cognitive radio, ultra wide spread spectrum, and other radio tech that promises great range and license free or minimal licensing operation due to lack of interference with other stuff.

Rust is the first serious C/C++ replacement contender IMHO.

RISC-V and other commodity open source silicon.

Cheaper faster ASIC production making custom silicon with 100X+ performance gains feasible for more problems.

Zero knowledge proofs, homomorphic crypto, etc.

mwcampbell said 13 days ago:

> The general trend toward returning computing to the edge

By "the edge", do you mean users' devices, or just more local data centers a la Cloudflare?

api said 13 days ago:

I mean users' devices and to a lesser extent things like federated infrastructure that's "closer to the user" socially speaking.

It's a trend in the earliest stages, sort of like cloud was in the early 2000s.

tootie said 13 days ago:

Gaze tracking. I've used the dedicated gaze tracking sensors from Tobii and it's really natural and responsive. I think we're going to see a lot of touchless interaction become popular in the post-covid world.

geewee said 13 days ago:

I agree. While there are natural limits to how precise the eye is for interactions (eyes naturally flicker back and forth) I definitely also feel like there's potential here. I did a university project on combining voice and gaze tracking for programming - and while gaze is good for e.g. selecting a region of the screen, it's hard to click small things with it.

luckylion said 13 days ago:

How accurate are those sensors? I've often thought how nice it would be to get rid of the mouse and use sensors to figure out where exactly on my screen I'm looking.

tootie said 13 days ago:

Mouse-level accuracy requires a well-calibrated setup and the correct size monitor and sitting posture. If you want to do something like a Square POS checkout and need to distinguish a random visitor looking at 4 buttons, it would be pretty forgiving.

luckylion said 13 days ago:

Thank you, so not yet an option for me; I doubt my posture and sitting position are regular enough that calibration would work for me.

Might, at some point, be a welcome addition to a touch pad though. If you touch the pad and your pointer is far away from where you're looking, jump to that area and do the fine tuning with the fingers.
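Roughly the behaviour I have in mind, as a toy sketch in TypeScript (the gaze coordinates and the snap threshold are assumptions; they'd come from whatever the tracker's SDK actually reports):

    // Sketch: warp the pointer to the gaze point on touchpad contact,
    // but only when the pointer is far from where the user is looking.
    const SNAP_THRESHOLD_PX = 250; // assumed tolerance; gaze is only accurate to a region

    type Point = { x: number; y: number };

    function pointerOnTouch(pointer: Point, gaze: Point): Point {
      const dist = Math.hypot(pointer.x - gaze.x, pointer.y - gaze.y);
      // Coarse jump to the gaze region; fine positioning stays on the fingers.
      return dist > SNAP_THRESHOLD_PX ? { ...gaze } : pointer;
    }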

mokanfar said 10 days ago:

I already do that without the need of a touch pad; just use your numpad keys as a mouse. If you're on Windows, look into mouse.ahk. Tobii will snap the mouse to where you're looking when it senses the mouse is moving. Works great when you want to stay on the home row; selecting text with it, though, is not as good.

followtherhythm said 13 days ago:
threeseed said 13 days ago:

Looks like the ScyllaDB playbook i.e. rewrite a popular Java app in C++ and sell it as a much faster product.

Going to be interesting to see if they survive as the pace of JVM improvements has been rapidly increasing in the last year or so.

agallego said 13 days ago:

thanks, though what we sell is operational simplicity. speed is nice, but not the main benefit. a single binary that's easy to run is what CIOs seem to be interested in. though we are young. fingers crossed it works :)

philipkglass said 13 days ago:

I agree that operational simplicity will sell this to more organizations than performance will. There just aren't that many companies in the world that are bumping up against the scaling limits of Kafka.

When I look at the landing page of vectorized.io it touts the speed repeatedly without mentioning this simplicity pitch you find deeper in the site:

Redpanda is a single binary you can scp to your servers and be done. We got rid of Zookeeper®, the JVM and garbage collection for a streamlined, predictable system.

That does sound great! Put that information right up front.

agallego said 11 days ago:

Thank you! Will do! <3

seibelj said 13 days ago:

zksnarks https://blog.ethereum.org/2016/12/05/zksnarks-in-a-nutshell/

Essentially lets you verify a computation is accurate without doing the computation yourself, even treating the computation as a black box so you don’t know what is computed. Many applications in privacy, but also for outsourced computation.

CalmStorm said 13 days ago:

One important weakness of zkSNARKs is that they require a trusted setup, for example [1]. A newer alternative is called zk-STARK [2], which doesn't require the trusted setup and is post-quantum secure. However, it significantly increases the size of the proof (~50 KB). In general, hash-based post-quantum algorithms require bigger sizes, and it will be interesting to watch the progress made in this regard.

[1]https://filecoin.io/blog/participate-in-our-trusted-setup-ce...

[2] https://eprint.iacr.org/2018/046.pdf

hanniabu said 13 days ago:

If I'm not mistaken, I believe they found a way to do a trustless setup a few months ago. Unfortunately I don't have any more info on hand, but I remember reading that in passing with regard to research performed by Ethereum developers.

abecedarius said 13 days ago:

I'm not up on the math, but https://electriccoin.co/blog/halo-recursive-proof-compositio... sounded like that sort of thing.

neonhat said 13 days ago:

Rainway: https://rainway.com/ Google Stadia: https://stadia.google.com/

It's not even about gaming. Fuck gaming. It's about the underlying streaming technology.

Imagine this same tech being used by a surgeon to perform surgery remotely. That's the type of use case I'm thinking about!

grogenaut said 13 days ago:

The reason these things work is because they use a datacenter in your city; that's why the latency is low. You'd have to have the doctor in the same locality, which I don't think is what you are thinking.

dkarp said 13 days ago:

Add http://shadow.tech/ to that list

BIackSwan said 13 days ago:

Tailscale - riding on the wireguard wave - https://tailscale.com/

Also Wireguard - https://www.wireguard.com/

canada_dry said 13 days ago:

> Tailscale

I'd recommend this alternative that doesn't require a 3rd party - which is one of the reasons to implement wireguard over a traditional VPN:

https://github.com/subspacecommunity/subspace

pot8n said 13 days ago:

I am sure I wouldn't use a service that can literally get into each and every device on my private network if they want to, or worse, if they get hacked. Each and every device in the network automatically accepts whatever public keys and endpoints get advertised by their servers and automatically connects to them. It's not only an overpriced, mediocre product; from a security perspective, it's the most dangerous SaaS service I've ever seen.

My biggest fear is once this company gets tied to WireGuard and the security disasters come out, WireGuard's fate will be tied to a mediocre commercial product that put money above engineering decisions.

vich said 13 days ago:

Not sure if this counts, but I look forward to seeing the future of meat alternatives - impossiblefoods.com, beyondmeat.com, eatnuggs.com, etc.

iameoghan said 13 days ago:

BioTech definitely counts haha

gok said 13 days ago:

Arm and RISC-V are both getting scalable vector compute support. Could lead to GPU-like compute capabilities without all the goofiness of GPUs.

The H.266 / VVC video compression standard will be finalized in a few months. Ignoring licensing issues (yes patents blah blah blah) industry-wide efficiency wins like that are always nice.

Generative machine learning (think GANs for images or GPT-2 for text) can be applied to video games. Truly unique narrative experiences!

Everything remote work-related. I previously thought my career would miss the WFH revolution and most knowledge workers would still go to the office until at least 2050, but now it seems clear that is going to get dramatically accelerated.

rsync said 13 days ago:

NextDNS (nextdns.io) is a genius idea that I very much wish I had thought of. I am a paying customer as of this past week and am integrating it in all the places I always meant to put a pihole ...

hanniabu said 13 days ago:

Is there nothing like this that's open source and can be used locally instead of the cloud?

Yoofie said 13 days ago:

I think you are looking for Pihole. [1]

[1]: https://pi-hole.net/

iameoghan said 13 days ago:

That's really interesting.

onimishra said 13 days ago:

Uizard - Using machine learning to get from a drawing to ui and code. As a programmer, I’m looking forward to getting a head start on my personal projects, before I need to involve a designer

https://uizard.io/

young_unixer said 13 days ago:

Low-level stuff & Linux: RISC-V, Vulkan, Wayland, Sway WM, Wireguard, Zig.

Web or high-level stuff: deno, Svelte, Vue, Webassembly, WebGPU, Flutter.

bionhoward said 13 days ago:

I had a blast writing a graph editor in Svelte but it was hard to debug. That was right after 3.0 came out, if it’s easier to debug now then I would love to build with it

pgt said 13 days ago:

Differential Dataflow is going to change the way apps are built: https://materialize.io/

gsvclass said 13 days ago:

I'd say my project Super Graph. It's a GraphQL-to-SQL compiler in Go. It saves developers thousands of man-hours and 10x's their productivity. 80% of web backend coding is struggling with ORMs, SQL and writing APIs. Super Graph does away with all that: a simple GraphQL query gets you the data you need.

https://github.com/dosco/super-graph
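For a feel of what that looks like from the client side, here's a minimal TypeScript sketch (the endpoint path, table and field names are illustrative assumptions, not necessarily Super Graph's defaults):

    // Sketch: POST a GraphQL query to a Super Graph style endpoint.
    // Super Graph compiles the query into a single SQL statement server-side.
    const query = `
      query {
        products(limit: 5, order_by: { price: desc }) {
          id
          name
          price
        }
      }`;

    async function fetchProducts(): Promise<unknown> {
      const res = await fetch("http://localhost:8080/api/v1/graphql", {
        method: "POST",
        headers: { "Content-Type": "application/json" },
        body: JSON.stringify({ query }),
      });
      return (await res.json()).data;
    }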

cpursley said 13 days ago:

Hasura + React Admin.

Combine Hasura (automatic GraphQL on top of PostgreSQL) with React Admin (low code CRUD apps) and you can build an entire back office admin suite or form app (API endpoints and admin front end) in a matter of hours.

This adaptor connects react-admin with Hasura: https://github.com/Steams/ra-data-hasura-graphql
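To give a rough idea of the wiring, here's a sketch (the adapter's exact factory name and signature are assumptions; check its README, but the react-admin side is standard):

    // Sketch: react-admin backed by Hasura through a Hasura data-provider adapter.
    // Assumes the adapter's default export is an async factory returning a dataProvider.
    import * as React from "react";
    import { useEffect, useState } from "react";
    import { Admin, Resource, ListGuesser } from "react-admin";
    import buildHasuraProvider from "ra-data-hasura-graphql";

    const App = () => {
      const [dataProvider, setDataProvider] = useState<any>(null);

      useEffect(() => {
        // Point the adapter at the Hasura GraphQL endpoint.
        buildHasuraProvider({ clientOptions: { uri: "http://localhost:8080/v1/graphql" } })
          .then(setDataProvider);
      }, []);

      if (!dataProvider) return <p>Loading...</p>;

      return (
        <Admin dataProvider={dataProvider}>
          {/* ListGuesser infers columns from the data, so each table is one line of code */}
          <Resource name="users" list={ListGuesser} />
          <Resource name="orders" list={ListGuesser} />
        </Admin>
      );
    };

    export default App;

That's more or less the whole admin: Hasura generates the GraphQL API from the Postgres schema, and react-admin generates the screens from the API.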

iameoghan said 13 days ago:

That's really handy to know. Was looking for an alternative to forest admin.

clouddrover said 13 days ago:

Further EV improvements. Companies like Lucid and Lightyear are doing interesting things on the EV efficiency side, though their cars are not aimed at the mass market. Lightyear is looking to commercialize their solar panels for other automakers as well:

https://lucidmotors.com/

https://lightyear.one/

Volkswagen and Hyundai are doing interesting things on the mass market side of EVs. Volkswagen is now the number 1 BEV maker in Europe and will probably become number 1 in China in a year or two:

https://www.schmidtmatthias.de/post/april-2020-european-elec...

Hyundai is also starting a big EV push. Their future 800-volt cars should be interesting:

https://www.autoexpress.co.uk/hyundai/109135/new-hyundai-pro...

threeseed said 13 days ago:

NVME over Fabric.

Only started to become available last year in AWS' more expensive instance types. But hoping it will become more widespread.

Benchmarks with Spark show real-world performance improvements of 2-3x, and SSDs will be much faster with PCIe 4.0.

cperciva said 13 days ago:

The m5 instance family was announced in 2017, IIRC.

threeseed said 13 days ago:

Sure but the Elastic Fabric Adapter is only on the top tier of instance types.

Hoping it trickles down for us normal people.

cperciva said 13 days ago:

Oh, I thought you were talking about how EBS disks are presented as NVMe even though they run over the EC2 network fabric.

poletopole said 13 days ago:

Wasmer is a project I'm watching closely for many reasons. I feel that as WASM becomes more commonplace, the role Wasmer plays will become clearer.

sidcool said 13 days ago:

Rocket propulsion tech. Not just because of SpaceX, but I really hope we develop newer and more efficient propulsion techniques.

sq_ said 13 days ago:

I'm definitely excited to see how the VASIMR tech being developed by Ad Astra pans out, and whether anybody manages to build a functional nuclear thermal rocket. Hopefully the new super-heavy-lift capacity that's expected in the coming decade will help to enable the groups working on those and other designs.

blackrock said 13 days ago:

I recently wondered if you can take your saliva, or blood, and get a diffraction scan of it.

Kind of like a spectrograph. You’d feed this into a machine learning system, and it would match up your pattern against a known dataset.

This might allow for faster identification and recognition, of known viruses and diseases.

Especially if the technology can identify the virus from just your spit.

azureus said 13 days ago:

We're working on this, it's fun, and it's already possible at lab scale. Check out Surface Enhanced Raman Spectroscopy (SERS).

And we've got a huge shot in the arm with covid. At consumer scale the hard problem is skipping any kind of sample prep. Requires unrealistically high sensitivity - we're trying to work around this with different no-prep based amplification techniques. There is actually a lot of interesting work happening in this space.

Further, currently the substrates use gold to enhance the Raman scattering of the incident light. As you can imagine, this can get pretty expensive. There has been some success in using low-cost paper/inkjet and textile/dye based approaches to introduce gold nanoparticles onto more familiar and easily mass-produced substrates to achieve the same effect.

SERS is pretty cool, hasn't got its due, and I think it's going to be one of the underdogs in the diagnostic/detection market in the coming years.

What's especially cool about it is that it doesn't restrict you to one thing: only genomes, only proteins, etc.

You may need a different strategy for different pathogens, detecting a gene or cell wall for one, a synthesised protein for another, or even the waste product / metabolite to check if a reaction took place; it's not a hammer looking for a nail.

koeng said 13 days ago:

Current methods can pretty easily identify the virus from just your spit (in fact, that's a protocol that is pretty widely used for different applications). The biggest problem is that normally you need to concentrate the virus in order to get into detectable range for the earlier days of infection. Seems like the range is ~2 days earlier with concentrating vs spit only, which is actually significant.

All diseases (other than prions) are just nucleic acid, so that's already possible. Nanopore sequencing is the future and will eat most other technologies' lunch.

azureus said 13 days ago:

We're a nanopore sequencing shop. Couldn't agree with you more that nanopore generally is the future, not just for sequencing. Can't wait for ONT's solid-state nanopore flow cells - you may get flow cells that run 2-3 times as long then.

But then for most things the problem is the prep, and not just because the whole portability thing goes for a toss. Yay, great, the sequencer is the size of a USB drive, but the rest of the lab isn't :/

More worryingly the biggest and most stubborn cost is now the prep, not the sequencing.

As you correctly pointed out - need to find a way around amplification - then both of these problems above go away if you can do direct PCR free sequencing.

The other less mentioned problem related to the above is also the need for parallelisation - some of those ultra-low costs you read about can only be realised when you sufficiently multiplex your samples. For instance, it's about 100 USD per reaction for ligation (the last step of prep before the sequencing starts), so you generally wait till you're sequencing at least 12 samples in the same reaction so that you're paying <$10 per sample, not $100 per sample, which is obviously insane.

koeng said 13 days ago:

The key is to do barcoding at the PCR amplification step. That way, you can get away with barcoding hundreds of samples in a single tube.

Really the prep screws it any other way. Is ONT coming out with solid state? Got a reference?

azureus said 13 days ago:

Solid state nanopores: still in research: https://nanoporetech.com/how-it-works/types-of-nanopores

I'm pretty sure they will come out with it. The protein nanopores were the first wave of nanopore research - its tried tested and stable now so they stick with it.

A few years after they launched, the first solid state nanopores were being demonstrated in the lab. Commercialising solid state nanopores seems to be easier if anything than protein nanopores because they slot right into silicon fabrication.

On PCR/barcoding... yeah, that's right - do it right in the PCR step. Sometimes we avoid it, if we are not yet sure about the protocol. I think what I meant to say is that the full promise of nanopore sequencing for me is achieved only when you can skip having to amplify/multiplex/barcode - just extract DNA, wash, add sequencing adaptors and go - for almost anything.

I think, the way they are talked about, people generally come in expecting that TODAY... they think they can literally stick in a single sample with no prep and get 1 Gb of sequencing done for $10 in an hour. I've seen quite a lot of that (even from people with PhDs :) )

So yeah, it's more that the minute you go PCR, you're in for a minimum of $20 per sample; often it's the highest-cost line item in your whole process.

If you're doing things like 16S metagenomics, you get sequencing at $2-3 and prep at 10 times that; it starts to feel "wrong" after a while, if you get what I mean.

We're trying everything we can to make sure we're running at full capacity so that we can always give low prices, even for single samples.

Also seeing if we can reduce prep cost with microfluidics/MEMS - ONT has VolTRAX for this, but there are a few other vendors in the market. That has the positive knock-on effect of also reducing labour cost.

umvi said 13 days ago:

Sounds suspiciously like Theranos

blackrock said 13 days ago:

Hah! I didn’t think of that. What kind of fraudulent product was she selling anyways?

sergiotapia said 13 days ago:

spit in = diseases.json out

chrischen said 13 days ago:

1. Functional strictly typed programming patterns. It's hard to say if functional languages themselves will get adoption, but we definitely see functional patterns being used more and more in languages like Javascript, and being pushed in things like React/Redux (see the sketch after this list).

2. Graphql/service-based architectures
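To make (1) concrete, the kind of pattern I mean is plain data plus pure functions; a minimal Redux-style reducer in TypeScript, for example:

    // A Redux-style reducer: state is immutable data, updates are pure functions.
    // No classes, no mutation: just (state, action) => newState.
    type Todo = { readonly id: number; readonly text: string; readonly done: boolean };

    type Action =
      | { type: "add"; id: number; text: string }
      | { type: "toggle"; id: number };

    function todosReducer(state: readonly Todo[], action: Action): readonly Todo[] {
      switch (action.type) {
        case "add":
          return [...state, { id: action.id, text: action.text, done: false }];
        case "toggle":
          return state.map(t => (t.id === action.id ? { ...t, done: !t.done } : t));
        default:
          return state;
      }
    }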

bionhoward said 13 days ago:

The jargon of FP is crazy though! How do you learn all of that?

darksaints said 13 days ago:

There are two subcommunities in the world of functional strictly typed programming languages. The Haskell camp is where you get the mind bending jargon and ivory tower ideas. Luckily, there is another camp that eschews this head-in-the-clouds thinking and sticks to practical matters first and foremost. You'll want to look for the ML family of languages: SML, Rust, Ocaml, F#, and Scala (my favorite of them all, but for some reason some people are trying to turn it into Haskell on the JVM).

stanislavb said 13 days ago:

It's definitely Elixir/Phoenix/Phoenix-LiveView. I'm even planning on using that tech stack in an upcoming project.

lpaone said 13 days ago:

I am very interested in new energy solutions as a way to de-carbonize and provide consistent supply of low cost energy.

1. Green hydrogen production and fuel cells. We are just scratching the surface of green hydrogen production. Hydrogen can be the energy carrier we need in the various use cases where batteries are not viable.

2. Nuclear SMRs. Definitely something that is more of a "something to watch".

3. Pumped hydro. The longest lasting, highest capacity, lowest cost, zero-carbon, grid-scale energy storage solution. I have been closely following a company I found on HN called Terrament. https://www.terramenthq.com

keithwhor said 13 days ago:

If you have time this long weekend, the team behind Autocode (Standard Library) [0] is looking for feedback. We launched a couple months ago here on HN and have been eating up community responses. :)

tl;dr is: we provide the entire development stack for API integration on rails. If you've ever wanted to ship some quick webhook or API integration logic but have found Zapier too limiting but spinning up an entire dev stack overkill, Autocode fits cleanly in between both. In-browser IDE, API autocomplete, a drag-and-drop UI that generates code for you, version control, revision history, cloud hosting for your APIs. Takes a minute or two to ship a webhook from scratch.

Disclaimer: Am founder. Am also happy to hear questions, thoughts, anything!

[0] https://stdlib.com/

iameoghan said 13 days ago:

I've been using Autocode on & off for a while. Thanks for the reminder to recheck it.

keithwhor said 13 days ago:

No problem! We just released major updates this week. :)

KhoomeiK said 13 days ago:

Deep Learning driven NLP. We've seen massive advancements in research, but from personal experience working with a few startups, these new forms of NLP are just beginning to hit the market now (most companies are still using Classical NLP techniques like part of speech tagging etc). It's a huge space and I can't wait to see its use cases expand.

Brain-Computer Interfaces.

Augmented Reality. As someone in this thread mentioned for self-driving cars, I think the hype cycle for AR is in the right spot for us to begin seeing real advancements in the next couple years, especially with Apple's recent announcement.

entha_saava said 13 days ago:

Svelte

Flutter

Zig, Nim & Crystal programming languages

Please.build (bazel clone in Go)

GraalVM's native image and CoreRT for .NET (though not much is heard about progress on CoreRT)

akudha said 13 days ago:

I've wanted to try mobile programming for a while (I am a web dev). Is Flutter a good choice for a mobile beginner like me, who hasn't done any mobile programming at all?

entha_saava said 13 days ago:

I'd say yes, although learning Dart may take a week or two in the worst case. Flutter, as opposed to React Native, is quite easy to set up. The declarative UI paradigm is nice.

akudha said 13 days ago:

One or two weeks is not bad at all for a new language. Thank you for answering.

I hope Google doesn't lose interest in Flutter and shutter it.

Any thoughts on using Flutter for the web?

entha_saava said 12 days ago:

Haven't used flutter for web. Apparently it uses canvas. Ok for getting shit done in some cases.

mjirv said 13 days ago:

1. Fishtown Analytics - makes dbt, a sql data modeling tool that has really caught on in the analytics world over the last year or two

2. Bubble - no-code!

3. Stripe - already big but has the potential to be the next Google/FB/MSFT etc

carapace said 13 days ago:
jdub said 13 days ago:

Honeycomb will hopefully (continue to) upend the "logs, metrics, and traces" world.

https://honeycomb.io/

thetwentyone said 13 days ago:

Programming languages count as technology, right?

I'm really excited for what Julia is doing - making a really nice ecosystem of modern, dynamic, high performance code.

iameoghan said 13 days ago:

Absolutely.

tmaly said 13 days ago:

I think 3D printing still has enormous potential.

They are printing jet engine parts with it these days.

holler said 13 days ago:

when can I print a hamburger? that’s when I’ll know we have made it to the future!

tmaly said 10 days ago:

I think something like this is already in the works. But I think the food replicators from Star Trek: The Next Generation would be better if we had the tech.

say_it_as_it_is said 13 days ago:

Was bitcoin or blockchain mentioned on HN back when it wasn't on many radars?

coderintherye said 13 days ago:

It was being talked about as early as 2010: https://news.ycombinator.com/item?id=1704924

I remember a lot of discussion in 2011 on it.

javert said 13 days ago:

I don't remember exact dates, but I can confirm this. I heard about bitcoin fairly early on from this site.

iameoghan said 13 days ago:

Would love to know the answer to this too..

zadler said 13 days ago:

AI assisted code completion.

darksaints said 13 days ago:

We have had AI-assisted code completion for a long time now. It used an obscure and esoteric form of Symbolic AI better known as type systems.

elevenoh said 13 days ago:

Student at UWaterloo killed it: https://www.theverge.com/2019/7/24/20708542/coding-autocompl...

I use this ~50% of the time.

bayesian_horse said 13 days ago:

Synthetic biology. Microalgae (food, biofuel, Co2 sequestration).

cinquemb said 13 days ago:

Brain computer interfaces and related research papers/techniques

codeisawesome said 13 days ago:

I'd like StarLink (or something else like it!) to succeed and increase internet adoption massively around the planet.

vbezhenar said 13 days ago:

Rust. It's an extremely interesting language for me. I'm trying to learn bits of it every few years, and while I still don't have any real tasks for it, I just love its development. Maybe it'll be the last and only programming language that everyone will use for the foreseeable future.

signaru said 13 days ago:

Dot Net Core (C#/Winforms) compiling to native code.

ReactOS/Wine. Lately I'm getting worried about where the Windows OS seems to be headed. ReactOS is slowly catching up, but recent developments seem promising. There are still many things I need that are not multi/cross-platform.

kiwicopple said 13 days ago:

If anyone is particularly brave, we are a new platform which is like Firebase, except it's built with Postgres:

https://app.supabase.io

We are essentially rebuilding DabbleDB (https://en.wikipedia.org/wiki/Dabble_DB) for the UI, and you get a bunch of middleware which is auto-generated: REST APIs, Realtime (CDC), auto-updating documentation. We will also tackle some of the more difficult tasks for Postgres, like replicating your database close to your users.
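To give a feel for the auto-generated APIs, reading a table through the JS client looks roughly like this (the project URL, key, and the "countries" table are placeholder examples):

    // Sketch: querying an auto-generated REST endpoint via supabase-js.
    import { createClient } from "@supabase/supabase-js";

    const supabase = createClient("https://your-project.supabase.co", "public-anon-key");

    async function listCountries() {
      const { data, error } = await supabase
        .from("countries")
        .select("id, name")
        .order("name");
      if (error) throw error;
      return data;
    }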

Also we're opensource! https://github.com/supabase/

npv789 said 13 days ago:

Implementing GraphQL would be nice.

chrdlu said 13 days ago:

Circle.so (https://circle.so/)

With a rapidly growing creator economy, the tooling around building custom communities is very far behind. The cutting edge is a Facebook group, Slack channel, or a Discord server.

hanniabu said 13 days ago:

Would totally use it if they had discord login integration so it'd be easy to migrate our community over.

randtrain34 said 13 days ago:

Deno.land

lukevp said 13 days ago:

Have you given it a try yet? I LOVE TypeScript and think the concept is really cool, but the compatibility story for NPM packages needs to be fleshed out somehow. Otherwise I fear it will suffer the same fate as the Python 2 to 3 transition.

m101 said 13 days ago:

www.CloudNC.com

Basically, 1) bringing the cost of CNC machined parts down to their marginal cost through automation, 2) reducing that marginal cost through higher machine utilisation rates, and 3) improving turnaround times and accuracy of parts for clients.

enginoor said 13 days ago:

ProtoLabs is already successful in this space. I wonder what competitive advantage CloudNC has?

m101 said 12 days ago:

They are trying to automate the g-code generation that controls the CNC machine. At the moment a human operator uses an intermediary piece of software to create that.
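For context, g-code is just a plain-text list of machine moves, so "generating" it programmatically for a trivial toolpath looks something like this toy TypeScript sketch (nothing like a real CAM system, which also handles tool selection, stepovers, arcs, collision checks, and so on):

    // Toy sketch: emit g-code for a single rectangular pass at fixed depth.
    function rectanglePass(width: number, height: number, feed: number): string {
      const lines = [
        "G21",           // millimetre units
        "G90",           // absolute positioning
        "G0 Z5",         // rapid move to a safe height
        "G0 X0 Y0",      // rapid move to the start corner
        "G1 Z-0.5 F100", // plunge to cutting depth
        `G1 X${width} F${feed}`, // cut along X
        `G1 Y${height}`,         // cut along Y
        "G1 X0",
        "G1 Y0",
        "G0 Z5",         // retract
      ];
      return lines.join("\n");
    }

    console.log(rectanglePass(100, 60, 600));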

sammnaser said 13 days ago:

Tailscale.

reinhardt1053 said 13 days ago:

I wouldn't trust such a 3rd-party service. It's better to host everything yourself, and for free: https://github.com/subspacecommunity/subspace

derekja said 13 days ago:

Nice, but what is the benefit over ZeroTier? They seem to provide very similar end results.

mech422 said 13 days ago:

Slick - I was wanting to try tinc + wireguard, might have to try this as well.

Thanks for the heads up!

iameoghan said 13 days ago:

Oh wow, that looks cool AF!

idoby said 13 days ago:

Quantified self tech and personalized medicine tech. Stem cell stuff. Genetic therapy stuff. We're moving from an age where a doctor was a static map of symptoms => treatments to an age where far more personalized data and processes are considered. The paradigm is also slowly shifting on being able to reverse rather than just avoid or prevent certain conditions and situations.

The level of sophistication is already making some doctors feel obsolete, by their own admission to me. If we don't get to live in exciting times, our children and grandchildren surely will.

said 13 days ago:
[deleted]
treelovinhippie said 13 days ago:

Holochain: https://developer.holochain.org

When you eventually grasp it, it makes blockchain look like we took a wrong turn in 2008.

WookieRushing said 13 days ago:

I took a quick look at Holochain and it's got the regular set of better-than-bitcoin/ethereum claims that most altcoins make. It's saying it's more efficient and safer to use than other blockchain platforms. Now these are really big claims, so let's see what it's got!

Looking at https://holochain.org/, most of my scam senses are not going off too much. It's got some weird testimonials and then a white paper! So far so good.

Ok, let's skip to the white paper. Now what I'm looking for here is mainly how you verify computations are valid amidst BFT and sybil attacks.

So it's got some stuff about how every message received can be verified by the receiver using "validation rules". Okay... so we can use custom validation rules that each receiver can define and run themselves. Fine, one such rule could be Bitcoin's proof of work.

So it can be as expensive as Bitcoin. Now of course there can be other rules that are less expensive than Bitcoin's, but there's a big reason Bitcoin's PoW is so expensive... it's been battle-tested and looked at by 1000s of people to verify that it's correct and can resist just about anything up to a 51% bad-node attack. Allowing any program to define its own set of validation rules in the hope that they will be faster doesn't make things safer; it just makes failures more likely. This looks like the major contribution that Holochain is trying to make: let everyone write their own proof-of-work functions that suit their needs, and mess it up. The number of Ethereum dapps that failed to write safe contracts is proof enough that this will happen in an identical fashion.

Did I miss something? Maybe there really is something here that's new, but I'm not seeing it at first glance. It doesn't look like a scam though, so it's got that going for it.

treelovinhippie said 13 days ago:

If you're coming from the blockchain scene you really won't get it at first glance. I first started mining Bitcoin in 2010 at $0.50 each, then went heavy into Ethereum from 2014. It took me a few weeks of deep-diving to unlearn a lot of the conditioning imprinted in the blockchain bubble.

Holochain has no mining, no staking, no core token, no fees, no global consensus, no global ledger, it's not a platform, and there's no possibility of a 51% attack.

It's more akin to a P2P protocol/framework/pattern where users store their data and run apps locally, and where each app is its own private distributed network. Underpinning all of that are cryptographic counter-signing events and immutable hash chains.

In addition they have a parallel project called Holo (yeh confusing) that acts as an _optional_ hosting bridge for Holochain apps to offer a simple UX for normal web users. With Holo, developers can pay a distributed network of hosts in HOT (this is the coin you see on exchanges) to serve their happ/dapp like any regular website via Cloudflare DNS. No browser extensions required, nor any need to buy crypto to interact with Holochain apps. HOT for now is an ERC20 but will swap for a Holo mutual credit currency in the near future.

Sidenote: when you grasp mutual credit cryptocurrencies you'll also see all traditional cryptocurrency tokens as nothing more than speculative gambling chips.

This is a pretty good Holochain intro podcast if you're coming from the blockchain scene: https://soundcloud.com/arthurfalls/holo-mixdown

Also checkout HoloREA and REA accounting (resource-event-agent). This is a good podcast on it with some mates of mine; we all worked at ConsenSys with a longer history in Ethereum before coming to the difficult realisation that Ethereum was the perpetuation of everything wrong with the global economy: https://soundcloud.com/user-376287461/holochainpodcast-2-pos...

said 12 days ago:
[deleted]
buboard said 13 days ago:

Anything to do with genomics - DNA sequencing costs less than $200 nowadays. Info-tech doesn't have much more to offer with humans being as faulty as they are. Biotech is the next frontier.

Findeton said 13 days ago:

Light-field technology. I believe its time has come. Actually I'm working on creating a cheap light-field camera and the pipeline from the video processing to the video player.

interestica said 13 days ago:

3D Bioprinting : corneas.

sidhanthp said 13 days ago:

https://www.letsdeel.com - super easy payroll for remote teams. Onboarding / payments is awesome.

peralp said 13 days ago:

Sysdig [1] great monitoring platform for containers on K8S

https://sysdig.com/

edoo said 13 days ago:

Jeeva wireless has been on my radar for years now. Wifi at 1/1000th of the normal power. They were supposed to be at market by now with $0.50 transceiver chips. Last I heard they made a cell phone that didn't require batteries. https://www.jeevawireless.com/

KerryJones said 13 days ago:

1. Wing - Drone Delivery (https://wing.com/)

2. Nanorobotics for dentistry (https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3723292/)

jatinshah said 13 days ago:

zk-S[NT]ARKs and related zero-knowledge tech, along with scalability improvements to Ethereum, are poised to revolutionize financial tech. It will take decades to play out, much like the consumer PC and the internet, and we are still in very early stages.

But when it really picks up the impact will be as big as when Dutch invented modern finance in early 1600s.

imsofuture said 13 days ago:
zubairlk said 13 days ago:

No/low code platforms such as bubble.io

iameoghan said 13 days ago:

Absolutely. I enjoy Bubble - should have put it on the list.

Any other platforms that have caught your attention?

mab122 said 12 days ago:

More IPFS applications, integrations, and services built on it.

As for "competition" like scuttlebut. I would love sneaker-mesh -net type mobile killer app that would actually work.

Hell, I wouldn't even mind if Facebook implemented something that would work when centralized infrastructure fails.

jppope said 13 days ago:

Ginkgo Bioworks, Oxidize, WASM (& Deno), serverless via isolates (e.g. Cloudflare Workers), Neuralink, OpenAI

elevenoh said 13 days ago:

Life extension interventions.

We're just beginning to see the dam break on funding life extension R&D.

said 13 days ago:
[deleted]
niftylettuce said 13 days ago:

I wasn't impressed with the engineering and JS client-side bundles on Integromat.

statictype said 13 days ago:

What was wrong with it? And why did you need to interact with it at all, I'm curious.

niftylettuce said 13 days ago:

A customer wanted me to help them integrate the API I built for https://forwardemail.net with it. They have client-side bundles that threw errors and the pages rendered blank.

michaelbrave said 13 days ago:

My list would be:

1. Swift - you can mix functional and object-oriented code in a way I've not seen anywhere else. It's also going to be multi-platform, including Windows in the next version, and it's making inroads with Swift for TensorFlow. I can see a lot of really cool things coming from this once it's multi-platform.

2. Jai Language by Jonathan Blow. I'm not sure when it will come out but what's been shown looks promising, a game specific language could cause other innovations that could later carry over to other languages.

3. Next-gen consoles. The Xbox Series X and PS5 are both doing some cool things with memory management, SSDs and GPUs. Many of these innovations will make it to PCs later.

4. New Email features (superhuman and HEY - by DHH) It seems like innovation is finally actually happening in this space.

5. Game engine innovations. Hammer 2 has some really cool UI for level design, Unreal 5 has some great lighting and handling of 3D, and id Tech is using 3D decals to cool effect while not being expensive. A lot of the technology happening in games will spill over into other areas: Unity is doing stuff with the automotive industry, Unreal with architecture.

6. AI in use of Making Art. A good example is Unity's new artengine (artomatix). https://artomatix.com/

7. Generative Design for engineering.

8. Dreams on ps4 - How quickly people can make something in this is amazing, if it ended up on PC or VR it could change everything.

9. AR as a tool for creators more than as a tool for consumers. 3D interactive CAD like Iron Man is more exciting than a game that makes you dizzy.

modeless said 13 days ago:

+1 for Dreams; it's revolutionary. Their rendering technology is black magic. It deserves far more attention than it will ever get trapped on PS4 with a $40 price tag. Dreams on PC with a free-to-play model and real-time collaborative editing would be the next Minecraft/Roblox.

alasdair_ said 13 days ago:

Node Red is nice. I used it today inside of Home Assistant to automate some stuff.

darksaints said 13 days ago:

SeL4, Nix/NixOS, and 1ML.

_____smurf_____ said 13 days ago:

https://endrainc.com/ This technology can have a huge impact on people's lives (especially in the Global South).

machinesbuddy said 13 days ago:

About [4]: The dumbest idea I've ever seen is to provide a platform to build software without writing code.

You make the job 1000x harder to avoid a few lines of code! Just make the coding part easier.

machinesbuddy said 13 days ago:

Take PlantUML sequence diagrams for example. Which one is easier: drag-drop, fix, etc., or just a few lines?

https://plantuml.com/sequence-diagram

vaibhavthevedi said 12 days ago:

I had hopes for Magic Leap, but it's been going through a roller coaster ride. Still, they are on my "to watch" list.

Things like the Apple AR glasses leak keep me hooked on AR and VR.

ayushgp said 13 days ago:

+1 for Hasura. It's such a pleasure to use. Having a configurable backend with such fine-grained authorization is just awesome. It literally cuts your project time in half.

nuclid said 13 days ago:

https://resistant.ai looks pretty sci-fi. They basically protect AI systems from AI-enabled attackers.

vijayshankarv said 13 days ago:

https://hash.ai/

Hash is a platform for simulation and I think this kind of stuff will become increasingly important.

said 13 days ago:
[deleted]
Awtem said 13 days ago:

Homomorphic encryption.

_theory_ said 12 days ago:

Electric VTOL for the masses: basically flying cars.

https://www.agilityprime.com/

mudge said 13 days ago:
leke said 13 days ago:

The one that has me most excited on OP's list is Strapi. It's the only one that I see myself using in the very near future.

iameoghan said 13 days ago:

I really like it.

My one criticism is that the docs are always slightly out of date. I would love more flexibility in the queries i.e. being able to do advanced queries on multiple collections, rather than having to resort to raw SQL. It'll get there I'm sure.

alasdair_ said 13 days ago:

Augmented reality. With every major tech company working on it, the next few years will be interesting.

yters said 13 days ago:

Applying intelligent design theory to bioinformatics and AI. A lot of untapped potential IMHO.

mister_hn said 13 days ago:

Rust programming language for its claimed safety

Stripe for payments

Kubernetes for cloud services and K8S on raspberry pi clusters

n_t said 13 days ago:

Unikernels - they seem promising and yet the ecosystem is not there. I think it's a matter of time.

alphast0rm said 13 days ago:

Ethereum becoming the value settlement layer of Web 3.0 [1]. Stablecoins have proven to be the killer dApp and there are ~$10B in circulation currently [2].

[1] https://ethereum.org/

[2] https://stablecoinstats.com/

dudus said 13 days ago:

10B Distributed Monopoly dollars.

said 13 days ago:
[deleted]
data_ders said 13 days ago:

dbt (data build tool). Bringing SWE best practices to analytics engineering. About damn time!

diehunde said 12 days ago:

- Anything in the NVRAM space

- Modern database companies such as Cockroach Labs, Couchbase, MemSQL

- Hashicorp

nkg said 13 days ago:

Machine Learning. I want an Alexa that would know how to learn and extend itself.

said 13 days ago:
[deleted]
pedalpete said 13 days ago:

Soft-EEGs

inference AI (signed up for the Google Alpha, but also looking at Elastic)

Sleep research as a generality

pot8n said 13 days ago:

eBPF

fortran77 said 13 days ago:

Push-to-talk "walkie-talkie" style audio. It's very handy, but it's a learning curve, and 20-somethings today hate talking to people. I think it could catch on (again) eventually.

see the "CB Radio" craze of the 1970s.

brainzap said 13 days ago:

Javascript libraries that compile instead of shipping a runtime.

rajaravivarma_r said 13 days ago:

Is there any technology/start up trying to cure baldness?

setudeora said 13 days ago:
Ken_Adler said 13 days ago:

My current favorite Shiny new thing: www.Grain.co

caogecym said 13 days ago:

Neuralink - reduce the friction of verbal communication

said 13 days ago:
[deleted]
CareyB said 13 days ago:

Energy generation, and storage.

fastbmk said 9 days ago:

FreeBSD

mandown2308 said 13 days ago:

Neuralink

bra-ket said 13 days ago:

Datadog

caogecym said 13 days ago:

Yep, they are great! Using them for the cloud version of my HTTP client - https://ihook.us

frostcs said 13 days ago:

6

grahamg said 13 days ago:

comma.ai - They produce the aftermarket hardware for openpilot. It's an open source driver-assistance system that performs the functions of Adaptive Cruise Control (ACC) and Automated Lane Centering (ALC) for compatible vehicles. Many car vendors already do this, but based on some footage it requires less user intervention.

mindfulplay said 13 days ago:

The most dangerous type of technology: putting the Silicon Valley mindset behind a critical, deadly torpedo is not just risky but callous.

Deferring to these people's (Hotz/Musk) perceived intellect as somehow determining the success of self-driving cars is a bit disingenuous.

stevekIabnik said 13 days ago:

> it's a great alternative for almost any Golang use case

This is bad shilling even by rust evangelism strike force standards.

stevekIabnik said 13 days ago:

Looks like nix shills are outnumbering rust shills. Pretty bad for us.

rstorr said 13 days ago:

holochain.org & codelingo.io

x3haloed said 13 days ago:

Uh mine. Duh.

jhoechtl said 13 days ago:

Contact me in private so I can charge you for my valuable advice on profitable investments.

x_stealth said 13 days ago:

We've been in stealth for a while, and our demos have been getting WOWs.

For early access here : https://bit.ly/36mEU6Q

choonway said 13 days ago:

What I'm watching is not on anyone's list.

iameoghan said 13 days ago:

....

choonway said 13 days ago:

If you have to know, it's the engineering equivalent of trying to divide by zero in Math.

yters said 13 days ago:

An invention invention?