You yourself must always believe in yourself the most.
p. 204
The more successful you are, in whatever field, the more negative feedback you will receive.
p. 202
That’s part of the problem; we don’t know what failure modes are being introduced and how deep they could extend.
p. 211
The things you say “no” to should be the ones you feel fall below the level of your skills, knowledge, or financial expectations.
p. 195
AI is both valuable and dangerous precisely because it’s an extension of our best and worst selves.
p. 210
Olümpiavõitjaks tõesti igaüks ei saa, aga amatöörina on kõigil võimalus särada.
lk 182
Even if it was fully aligned with human interests, a sufficiently powerful AI could potentially overwrite its programming, discarding safety and alignment features apparently built in.
p. 209
AI safety researchers worry (correctly) that should something like an AGI be created, humanity would no longer control its own destiny. For the first time, we would be toppled as the dominant species in the known universe.
p. 209
For example, it is worth setting yourself a rule (especially if your side income source is service-based) that after every fifth or tenth client you raise your price a little.
p. 130
Massively omni-use general-purpose technologies will change both society and what it means to be human.
p. 203
It is wise to let go immediately of those clients whose biggest demand is that the product be cheap.
p. 122
Everywhere you look, technology accelerates this dematerialization, reducing complexity for the end consumer by providing continuous consumption services rather than traditional buy-once products. … The question now becomes, what else could be made into a service, collapsed into the existing suite of another mega-business?
p. 189
… that the hourly rate you earn from your side income source should be at least the same as (if not higher than, in terms of time spent) what you earn at your day job, because otherwise it is unreasonable.
p. 121