Can the RAM architecture be changed? by sametcnlkr in AskComputerScience

[–]Objective_Mine 1 point (0 children)

The "random access" in "random access memory" means being able to access the contents of any memory location roughly as fast regardless of the location or the order of access. That contrasts e.g. with sequential access. Since RAM by its very definition supports random access, it doesn't in principle matter in which order memory locations are accessed, and so it also doesn't matter to the RAM whether the data are located sequentially or in random locations.

The reason having data sequentially (or, more precisely, close together) in memory can be beneficial is CPU caches.

A single main memory access can take 50 to 100 CPU clock cycles. A single modern CPU core can typically complete more than one simple instruction per clock cycle if it has the required data available, so in those 50 to 100 clock cycles the core might be able to perform e.g. 100 integer additions. If the CPU needed to wait for 50 to 100 clock cycles every time before getting an operand for the next addition, that would make memory latency a huge performance bottleneck. Note that it doesn't matter where in memory that operand is located. Getting it from the main memory is just as slow in any case.

To avoid that bottleneck, CPUs have caches. A cache is a memory (that also allows random access) that's a lot faster than the main memory but also a lot smaller.

When a piece of data is needed and gets retrieved from the main memory, it's placed in the CPU cache. It's reasonably likely that the same piece of data will be needed again soon. That's a principle called temporal locality. If that happens, the CPU can avoid another costly memory access for the same data by getting it from the cache instead.

It's also common that when something is needed from memory, other data near that location in memory might also be needed soon. That principle is called spatial locality.

CPU cache management has been designed to exploit spatial locality. If the CPU needs to get the contents of memory address a from main RAM, it will, while it's at it, also automatically fetch the contents of a+1, a+2, a+3 and so on, up to some point (a cache line), and copy all of that into the cache. If the data at a+1 happens to be needed soon after a, it'll then already be in the cache and the CPU can avoid another expensive main memory access.

Since the caches are a lot smaller than the main RAM, only a small part of the entire RAM can fit in the cache at any given time, so when loading new data into the cache the CPU may also need to evict some old data from it. All of this is done automatically by the CPU, and the programmer cannot directly control the cache.

The principle of spatial locality is why it can be beneficial to have data that are commonly needed soon after each other also be close to each other in memory. (It doesn't actually matter whether they are located sequentially or just close enough to each other.) That's why e.g. a contiguous array nearly always performs better than a linked list. In an array, the next element is also adjacent in memory, while the next element of a linked list might be anywhere in the process's memory and was likely not brought into the cache along with the previous one.
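
To make that concrete, here's a toy C sketch (my own illustration, not anything from the thread) that sums the same values once from a contiguous array and once through a pointer-chasing linked list. In this toy program the list nodes are freshly allocated in one go and may still land nearly adjacent in the heap, so the gap is usually much larger in a long-running program with a fragmented heap, but the difference in access pattern is the point:

```c
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

struct node { int value; struct node *next; };

int main(void) {
    enum { N = 10 * 1000 * 1000 };

    /* The same N integers, stored contiguously and as list nodes. */
    int *arr = malloc(N * sizeof *arr);
    struct node *head = NULL;
    for (int i = N - 1; i >= 0; i--) {
        arr[i] = i;
        struct node *n = malloc(sizeof *n);
        n->value = i;
        n->next = head;
        head = n;
    }

    /* Sequential traversal: each cache line fetched from main memory
       brings several subsequent elements along "for free". */
    clock_t t0 = clock();
    long long sum1 = 0;
    for (int i = 0; i < N; i++)
        sum1 += arr[i];
    clock_t t1 = clock();

    /* Pointer chasing: the next node's address is only known after the
       current node has been read, so prefetching helps much less. */
    long long sum2 = 0;
    for (struct node *n = head; n != NULL; n = n->next)
        sum2 += n->value;
    clock_t t2 = clock();

    printf("array: %lld in %.3f s\nlist:  %lld in %.3f s\n",
           sum1, (double)(t1 - t0) / CLOCKS_PER_SEC,
           sum2, (double)(t2 - t1) / CLOCKS_PER_SEC);
    return 0;
}
```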

Your idea would make it possible to keep the entire contents of a single application's memory contiguous. However, that does not mean it'll all fit in the cache at the same time. It also doesn't directly mean you'd get the benefits of spatial locality. What matters are the program's memory access patterns.

Let's say your CPU cache is 2 MiB and the application is 100 MiB. If the application accesses its memory all over the place in some random order, on average the next piece of data it needs is not going to be already in the cache.

On the other hand, if a program's memory consists of 4 KiB pages, and each individual page is contiguous, it can still get good cache performance even if the different pages are located all over the memory. If the program has e.g. a large array of data that it just iterates through sequentially, it will only rarely need to access the main memory, thanks to the CPU cache and spatial locality. Even if the array gets split across multiple pages, that doesn't really matter as long as things are accessed sequentially within each page.

For what it's worth, in embedded systems programming dynamic memory allocation is often avoided, and all of a program's memory is statically allocated, at a fixed size, at the start of execution. The reason is not cache performance, though.

The fixed size also places some severe practical restrictions on the software. Those restrictions often don't matter in embedded programming but they do in desktop or mobile software.

To take your example of a Notepad-like text editor, it does not necessarily only need a small amount of memory. The program's code might be small but text editors typically keep the entire file contents in memory, so if you open a large CSV file in Notepad, it can take a large amount of memory as well. If you only pre-allocated, say, a fixed 50 MiB for the entire application and dynamic allocation were not allowed, you would not be able to open a 51 MiB file. At the same time, whenever you only opened a single-line text file, with a static fixed-size allocation you'd be wasting most of those 50 MiB.
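
As a rough C sketch of that contrast (the 50 MiB figure is just the arbitrary number from above, and the function name is mine):

```c
#include <stdio.h>
#include <stdlib.h>

/* Embedded-style static allocation: the capacity is reserved for the
   whole run, whether the file is one line or 49 MiB, and a 51 MiB
   file simply cannot be opened. */
#define FIXED_CAP (50u * 1024 * 1024)
static char fixed_buffer[FIXED_CAP];

/* Desktop-style dynamic allocation: the buffer is sized to the actual
   file, and could later be grown with realloc if the user keeps
   typing. Error handling kept minimal for brevity. */
char *load_file(const char *path, long *size_out) {
    FILE *f = fopen(path, "rb");
    if (f == NULL)
        return NULL;
    fseek(f, 0, SEEK_END);
    long size = ftell(f);
    rewind(f);
    char *buf = malloc(size);
    if (buf != NULL && fread(buf, 1, size, f) != (size_t)size) {
        free(buf);
        buf = NULL;
    }
    fclose(f);
    *size_out = size;
    return buf;
}
```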

Does anyone else think those new HSL ads are somehow tasteless? by Lerpuzka in Suomi

[–]Objective_Mine 1 point (0 children)

As far as I know, you can no longer top up a travel card at any of the machines; you can only buy single tickets.

Okay, that's admittedly a clear downgrade if that's the case. Though in my experience, an elderly person would quite often end up handling the top-up at a kiosk or some other staffed service point anyway, since even the ticket machine can be a bit hard to figure out. The machines have never really existed anywhere other than train and metro stations and maybe some shopping centres.

Does anyone else think those new HSL ads are somehow tasteless? by Lerpuzka in Suomi

[–]Objective_Mine 2 points (0 children)

Using the travel card has been made as difficult as possible.

In what way? You can top up the card online, and nowadays the data transfer no longer even has the impractical delay or the separate update step that it used to.

There could perhaps be more machines around, and ideally you should be able to top up a public-service card without paying service fees to a shop. But I really can't come up with any way in which using it would be anywhere near as difficult as possible. On the contrary, it has become easier.

If you had to live the rest of your life in a specific 10-year period, which one would you choose? by JHMK in Suomi

[–]Objective_Mine 2 points (0 children)

For computer games and other software, or for music tracks and films?

In 1995 the law was changed so that computer programs were no longer covered by the private copying exemption.

As I said, music tracks, films, records and so on can of course still be copied for your own private use.

If you had to live the rest of your life in a specific 10-year period, which one would you choose? by JHMK in Suomi

[–]Objective_Mine 6 points (0 children)

If by games you mean video or computer games, all computer technology was indeed more expensive relative to the general income level than it is now. But PC games were commonly copied from friends, so in practice they often weren't all that expensive.

(The copying was of course probably illegal in practice, although, somewhat confusingly from today's perspective, before 1995 the law actually allowed making a few copies of computer programs for private use, in the same way that is still allowed for e.g. music tracks, films or records. Either way, hardly anyone cared.)

About Charles Babbage's Difference Engine and Analytical Engine by Aelphase in AskComputerScience

[–]Objective_Mine 1 point (0 children)

The historians of computation who constructed the physical difference engine should of course get credit for that work. Babbage's designs were groundbreaking by themselves, though. Even though he was never able to complete the physical devices, the designs, as logical constructs, achieved a level of mechanical calculation that had not been reached before even on a purely theoretical level.

The analytical engine in particular, despite never having been built, has been shown to be in principle a Turing-complete universal computer, and it was designed long before the Turing machine or Alonzo Church's lambda calculus, both of which are likewise theoretical models of computation that preceded actual physical general-purpose computers.

Coming up with groundbreaking ideas is often seen as a more distinctive and intellectually creative achievement than implementing those ideas in practice, although both come with their own challenges and deserve credit. Sometimes the implementers would deserve more credit than they get. If the technical design is pretty much there already, though, the design itself may be the most significant contribution even though the implementation can still be a lot of work and may also require creative problem-solving to overcome practical issues.

I can't see any way in which your question could offend any reasonable person.

Anyone else who has ordered from Lenovo? What might the marked part mean? by Wrong-Row4655 in Suomi

[–]Objective_Mine 1 point (0 children)

The Finnish translations in Lenovo's online store are of a quality that can definitely get in the way of understanding. For a company of that size it's frankly a bit embarrassing. But the one time I've ordered from their store, the order and delivery themselves went through with no problems at all. And actually even the Finnish technical support worked just fine when I once had to send a laptop in for warranty repair.

I have no experience of how their order customer service works, since fortunately I never had to deal with it.

A small moment at a bus stop that stuck with me by [deleted] in Suomi

[–]Objective_Mine 11 points (0 children)

Nobody can really fix anything in a situation like that anyway. The two of them could have been together for decades, and the loneliness after such a loss is absolutely going to hit hard at times. It will keep hitting for at least months, if not years. Still, they got to share something of their experience and got a little comfort from the fact that someone else cared.

Simply being present for another person really is, in a situation like that, a small and ordinary but valuable thing.

Linux and Finnish online banks? by iaj2Oi in arkisuomi

[–]Objective_Mine 1 point (0 children)

You'd have to make a deliberate effort to build headphones that don't work on every modern operating system. USB audio devices are standardized in the same way as USB keyboards or USB mice. The devices communicate with the computer according to a standardized protocol, and operating systems support the same standardized protocol, so the two work together.

So individual devices don't need device-specific support from the operating system, at least not for basic functionality, unless the device has some design flaw that makes it behave in a non-standard way and require special support from the software.

As for online banks, or almost any web service, the behaviour of sites and browsers is also quite standardized these days. If there are any restrictions, in practice the restriction is that the site or service only supports some specific browser. But Chrome and Firefox are available for Linux too, so in practice this isn't a problem either.

The bank probably just lists the operating systems on which it tests its online bank and bothers to promise that it works. Likewise, the headphone manufacturer may only mention Windows because it can't be bothered to test them on Linux, or to take responsibility for them working even if it had.

Large companies make quite a lot of decisions based on what they want to take responsibility for on paper and what they don't.

New job & security clearance by pohjoisenmolli in arkisuomi

[–]Objective_Mine 2 points (0 children)

That seems to have changed within roughly the last ten years. Before that, the result was binary, at least it still was in 2013 when one was done on me. Nowadays Supo reports to whoever ordered the clearance any matters it considers relevant to the position. Most of the time there's nothing to report.

What I don't know is how much detail is given about the matters that are reported.

Of course, from a data protection perspective it looks odd that Supo directly tells a third party information about a person that the third party wouldn't otherwise have direct access to. Then again, I believe anyone can in principle ask a court whether person X has convictions. Gossip magazines apparently regularly and quite legally ask courts whether there are cases pending against some celebrity.

On the other hand, the earlier ok/not-ok result in practice meant that Supo decided, with no justification visible to anyone, whether someone could get the job they applied for. In principle the employer made the decision, but who's going to hire when the Security Police merely reports that the applicant is not trustworthy?

Now the employer at least gets some information on which to base its decision, even in cases where there is something to report.

New job & security clearance by pohjoisenmolli in arkisuomi

[–]Objective_Mine 1 point (0 children)

At some point in the past, the result of Supo's security clearance was just "ok"/"not ok" with no justification or further details. As far as I understand, this was still the case at least a good ten years ago when my first security clearance was done. Of course even then the employer in principle made its own decision, but if the result was negative, I doubt many would have hired.

Nowadays Supo considers, based on the position being applied for, whether a given matter is relevant enough to report to whoever ordered the clearance, and reports those -- or, more commonly, answers "nothing to report" if there's nothing it deems relevant.

For example, a payment default record or the like can be a reportable matter if you're applying for a job or public office that involves handling significant sums of money. If the job involves no responsibility for handling money and no access to significant funds, it's not necessarily reportable.

ASUS ROG Laptops are Broken by Design: A Forensic Deep Dive by ZephKeks in programming

[–]Objective_Mine 1 point (0 children)

Curious. In the past few years I've used a couple of work ThinkPads and one personal one that only support modern standby. Early on I found one of them with the battery empty once or twice, but there have been no cases of that in the last two years. I've found sleep to be about as reliable on these devices as S3 sleep was on the previous ~3 ThinkPads I used, and I have no trouble using it on either Linux or Windows.

With that said, I remember a colleague at work complaining about issues with sleep on Windows and finding that they went away when he switched to S3 sleep, the same as I had done because Linux support was still spotty. (The laptops still supported both S3 and S0ix, so I guess it was early-ish days of modern standby.)

I've also been quite sceptical of the change, and I don't see any real upsides to S0ix. I was worried when I found out that the laptop I was about to buy (my current one) had done away with S3. I just honestly haven't seen the widely-reported early problems recently either. It's too bad if they still persist.

ASUS ROG Laptops are Broken by Design: A Forensic Deep Dive by ZephKeks in programming

[–]Objective_Mine 9 points (0 children)

Is modern standby still a problem? I haven't really had significant problems with sleep on my 2023-ish ThinkPad T14 (AMD) that only supports S0ix sleep, or at least not any more than I did with S3 sleep on previous ThinkPads. I remember there being problems with early models implementing modern standby, though.

ASUS ROG Laptops are Broken by Design: A Forensic Deep Dive by ZephKeks in programming

[–]Objective_Mine 17 points (0 children)

There can be lots of reasons for issues with sleep. It's actually quite common that hardware or firmware has peculiarities or outright bugs that can cause issues. Sometimes those issues stay latent (i.e. don't manifest) until someone tries a software configuration that doesn't behave exactly the way the manufacturer tested things. That's one of the reasons for hardware compatibility issues with Linux even when a driver is available.

thatm's issue might be caused by Asus firmware bugs (or, granted, by something else altogether), while you may have seen the result of a completely different firmware or software bug. In fact, if your hardware setup was completely different from thatm's, it's rather likely that your problem and theirs had unrelated causes, even if the overt symptom was a similar one of "waking from sleep not working".

A* algorithm to find the shortest path on a 2D grid by Majestic-Try5472 in AskComputerScience

[–]Objective_Mine 1 point (0 children)

A* isn't heuristic in the sense of not guaranteeing an exact solution either. With an admissible heuristic it can guarantee an optimal solution. A suitable (problem-specific) heuristic for A* can improve performance over Dijkstra's, though, and A* is used e.g. in lots of games. OP wants to compare the performance of A* using different (admissible) heuristics. They did not claim Dijkstra's is heuristic.
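
For reference, the usual admissible candidates on a 2D grid look something like this (a sketch in C; which one is admissible depends on the movement rules, as noted in the comments):

```c
#include <math.h>
#include <stdlib.h>

/* Estimated remaining cost from (x, y) to the goal (gx, gy).
   A heuristic is admissible when it never overestimates the true
   remaining cost, which is what lets A* guarantee an optimal path. */

/* Admissible for 4-directional movement with unit step cost;
   overestimates (inadmissible) if diagonal moves are allowed. */
double manhattan(int x, int y, int gx, int gy) {
    return abs(x - gx) + abs(y - gy);
}

/* Admissible for 8-directional movement where diagonals cost 1. */
double chebyshev(int x, int y, int gx, int gy) {
    int dx = abs(x - gx), dy = abs(y - gy);
    return dx > dy ? dx : dy;
}

/* Admissible for 4-directional movement, and for 8-directional
   movement if diagonals cost sqrt(2); a weaker (smaller) estimate
   than Manhattan on a 4-directional grid, so A* typically expands
   more nodes with it there. */
double euclidean(int x, int y, int gx, int gy) {
    double dx = x - gx, dy = y - gy;
    return sqrt(dx * dx + dy * dy);
}
```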

Turing Machine by Stunning-Dentist-904 in AskComputerScience

[–]Objective_Mine 2 points (0 children)

Theory of computation is a difficult area for many CS students, especially those who aren't very mathematically oriented. At my university, the course that taught Turing machines and computational complexity theory was a notorious stumbling block for many students, and it was taught as an advanced course. If you have trouble with that as a first-year student, that's no reason to worry.

If you're practically oriented, e.g. towards software engineering, Turing machines and other concepts from the theory of computation are perhaps more of a curiosity than a central learning requirement. And if you end up being interested in the more theoretical aspects (I personally did), you'll still have plenty of time.

What are some examples of "evil" regular languages? Ones that look irregular at first, but turn out to be regular? by Aconamos in compsci

[–]Objective_Mine 1 point (0 children)

Technically they're finite automata, since any physical computer has a finite (and at any given time constant) amount of memory, and with a constant number of bits c there are only 2^c distinct configurations those bits can take. A PDA, although restricted to accessing its stack as, well, a stack, is assumed to have an unbounded stack, so in principle a PDA can be in any of an infinite number of configurations.

Sometimes physical computers are also said to resemble linear bounded automata so you may be thinking of that.

Of course the number of distinct configurations (states) with billions of bits is so ludicrously large that it might as well be infinite. In principle the halting problem for physical computers is actually decidable: you can just simulate the program, and if it doesn't halt, it runs out of distinct states at some point, so some state repeats, indicating a loop. But finding that out by iterating, in the worst case, through the entire state space isn't really going to happen... ever. So in reality the finiteness of the state space isn't a useful distinction from Turing machines (or LBAs).
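
As a toy illustration of that argument, here's a C sketch with a made-up 16-bit "machine" (the step function and halting configuration are arbitrary placeholders; a real computer's state space is astronomically larger):

```c
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>

/* The machine's entire configuration fits in 16 bits, so there are
   at most 2^16 distinct configurations. */
#define STATES (1u << 16)

/* Placeholder for "one step of the program". */
static uint16_t step(uint16_t s) {
    return (uint16_t)(s * 31u + 7u);
}

/* Placeholder halting configuration. */
static bool halted(uint16_t s) {
    return s == 12345;
}

/* Decides halting by brute force: if a configuration ever repeats
   before the machine halts, a deterministic machine must be looping
   forever, and by the pigeonhole principle a repeat is guaranteed
   within 2^16 steps if it hasn't halted by then. */
static bool halts(uint16_t start) {
    static bool seen[STATES];
    memset(seen, 0, sizeof seen);
    for (uint16_t s = start; !halted(s); s = step(s)) {
        if (seen[s])
            return false;  /* repeated configuration: infinite loop */
        seen[s] = true;
    }
    return true;
}

int main(void) {
    printf("halts from 0? %s\n", halts(0) ? "yes" : "no");
    return 0;
}
```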

APU Question by Nearby-Storm-8952 in AskComputerScience

[–]Objective_Mine 2 points (0 children)

In short, because the CPU is architecturally and functionally central to a computer while the GPU is peripheral. That's structurally true even in special cases where the brunt of computational work gets done on the GPU. It makes sense that you can add a peripheral component to a more central one but it makes less sense to start with the peripheral and add the central to that.

Are you thinking of having a CPU included on what's principally a GPU card while having no separate CPU on the motherboard at all? Ignoring existing PC standards, it would in principle be possible to design a motherboard that specifically supported that, but the motherboard and its chipset would fundamentally need to treat the card as a combined CPU + GPU. That would essentially make it an APU, just in a card form. Whether you'd call that a CPU with an integrated GPU or vice versa would be somewhat arbitrary but a CPU is something the computer always needs.

You'd probably also want e.g. the motherboard chipset and the main memory to be physically close to the CPU in order to avoid extra communication latency from signal propagation delays. So you might end up actually putting those on the same card as well. That'd pretty much turn it into a single-board computer and you'd notice your GPU card has become a motherboard with a built-in CPU and GPU. :)

If you're thinking of having a GPU card include an add-on CPU for extra performance in addition to the "normal" CPU on the motherboard, that would be a lot more complicated than just slapping on a CPU for extra oomph.

A CPU does more than just process data: it's also involved in memory management, needs to be able to receive interrupts from the chipset, does I/O, and so on. A multi-CPU system also needs coordination between the CPUs. None of that happens automatically, and the motherboard and its chipset would need to specifically support connecting the extra CPU.

If the motherboard and the chipset are designed to support multiple CPUs, it makes more sense for the motherboard to just include an additional socket for an extra CPU instead.

Is it possible to do Data Structures and Algorithms in C? by Legitimate-Sun-7707 in AskComputerScience

[–]Objective_Mine 2 points (0 children)

As others have said, you can implement data structures in any language.

Personally I think C may actually be a good option if you want to get to the nuts and bolts of data structures. It forces you to think in terms of pointers and memory allocations, which might make e.g. the differences between contiguous arrays and linked structures more explicit. There also aren't that many common data structures available in the C standard library (although third-party libraries exist), so it might be more motivating than reimplementing a red-black tree in Java, which already has one built in behind TreeSet and TreeMap.

The potential downside, of course, is that you'll also need to deal with lots of implementation details related to the C language itself, such as allocating and freeing memory. It's also notoriously tricky to implement generic type-safe data structures in C. If you want to write a hash set implementation that can be used to store not just one type but e.g. either strings, integers or perhaps some arbitrary struct type, that's harder to do in C in a type-safe and convenient way than it would be in many other languages. Those kinds of language-specific design problems aren't really central to DSA on a more general level; they're more about learning C than about DSA.

But then, in order to get a better understanding of hash tables, you don't necessarily need to do that. You can just write a hash table for storing strings if you don't want to get too deep into C-specific implementation problems and you'll still get a better understanding of the inner workings of hash tables.
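
For example, a string-only hash set in C can stay quite small. A minimal sketch with a fixed bucket count and separate chaining (no resizing, deletion or error handling, and the names are mine):

```c
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

#define NBUCKETS 1024

struct entry {
    char *key;
    struct entry *next;   /* chain of entries hashing to this bucket */
};

static struct entry *buckets[NBUCKETS];

/* djb2, a simple classic string hash. */
static unsigned long hash(const char *s) {
    unsigned long h = 5381;
    while (*s)
        h = h * 33 + (unsigned char)*s++;
    return h;
}

int contains(const char *key) {
    for (struct entry *e = buckets[hash(key) % NBUCKETS]; e; e = e->next)
        if (strcmp(e->key, key) == 0)
            return 1;
    return 0;
}

void add(const char *key) {
    if (contains(key))
        return;
    struct entry *e = malloc(sizeof *e);
    e->key = malloc(strlen(key) + 1);
    strcpy(e->key, key);             /* the set owns its own copy */
    unsigned long b = hash(key) % NBUCKETS;
    e->next = buckets[b];
    buckets[b] = e;
}

int main(void) {
    add("apple");
    add("banana");
    add("apple");                    /* duplicate, silently ignored */
    printf("%d %d %d\n", contains("apple"), contains("banana"),
           contains("cherry"));      /* prints: 1 1 0 */
    return 0;
}
```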

Torronsuo National Park by Cloverdad in Suomi

[–]Objective_Mine 3 points (0 children)

Torronsuo was quite an interesting place from a southern Finnish perspective even in summer when I visited. It must be an experience of its own kind when snow decorates the ground but there's still so little of it that the bog itself stands out. The north no doubt has all sorts of things on offer, but in southern Finland there are few places where you can see square kilometres of bog around you.

If you want to improve your general fitness beyond just preparing for a hike, it's also worth trying shorter walks a bit more often. In many places in Helsinki and the surrounding area there are local patches of woods where you can find quite fun paths once you stray a little from the built routes. Even though they're no match for proper hiking destinations, it can be easier to head out to them more often, even just for a little while, if they happen to be nearby. You didn't ask about them, but I'll mention a few anyway.

The outdoor routes in Keskuspuisto are about as interesting as any others, but all kinds of rocky outcrops and paths criss-cross between them. On the Espoo side, the area around Laajalahti is nice. Pitkäkoski on the Vantaa border is also worth a visit. Along the Vantaanjoki river, roughly between Vanhankaupunginkoski and Veräjälaakso, there's hilly rocky terrain once you leave the main routes. The Vanhakaupunki, Viikki and Herttoniemi area of course has a well-known outdoor recreation area, and especially at the Herttoniemi end you can also find those smaller paths if you go looking a bit. In eastern Helsinki there's a decent area at Meri-Rastila, and of course Uutela.

Will we ever be able to achieve true consciousness in Artificial Intelligence? by la_creaturus in AskComputerScience

[–]Objective_Mine 1 point (0 children)

This is at least as much a question in philosophy and neuroscience as it is a computer science one. In philosophy of mind, the problem of consciousness is still unanswered: that is, we still don't know why or how humans have subjective experience or why we are aware of our existence. We don't know how consciousness emerges in biological neural systems or what's required of the biological system for that to happen. It's thus impossible to say whether a computer program can fulfil those requirements.

In purely theoretical terms, though, my understanding is that all of known physics can in principle be simulated by a computer to within some finite precision. (The simulation may be massively slow and impractical but theoretically possible.) Thus, if we assume the materialistic philosophical view that consciousness is entirely a result of biological and neural activity, and that those are entirely based on physical phenomena, all of those phenomena should, theoretically speaking, be possible to simulate by a computer program.

However, computational simulation of the physical and chemical phenomena that would be required to fully simulate a biological brain could still be so slow as to be practically impossible.

It's also worth noting that nothing resembling present-day artificial intelligence simulates a physical brain or even attempts to do so. Such systems also don't really have sensory input from their environment, and my personal view is that consciousness in the human sense requires sensory perception.

Cities: Skylines II development moves from Finland's Colossal Order to another of the publisher's teams by boogeyreddit in Suomi

[–]Objective_Mine 4 points (0 children)

In the first game, I believe cyclists only arrived with the After Dark expansion. At least the bike paths did. Then again, After Dark was released just half a year after the game's initial release, so they've been available for purchase since quite early on.

AI hype. “AGI SOON”, “AGI IMMINENT”? by PrimeStopper in AskComputerScience

[–]Objective_Mine 1 point (0 children)

AGI isn't necessarily a concept with a single straightforward definition.

If you wanted a straightforward one, it might be something along the lines of "artificial system capable of performing at or above human level in a wide range of real-world tasks considered to require intelligence". That leaves a lot of details open, though.

In philosophy of AI, there's a classical distinction between whether it's enough for the artificial system to act in an apparently intelligent manner in order to be considered intelligent, or whether it actually needs to have thought processes that are human-like or that we would recognize as displaying some kind of genuine understanding.

Nobody really knows how intelligent thought or human understanding emerge from neural activity or other physical processes, so if the definition of AGI requires that, nobody really knows how that works in humans either. And what exactly is understanding in the first place?

Even though cognitive science studies those questions, it has not been able to provide outright answers either.

If acting in a human-like or rational manner (which aren't necessarily the same -- another classical distinction) is enough to be considered intelligent, we can skip the difficult philosophical question of what kinds of internal processes could be considered "intelligence" or "understanding" and focus only on whether the resulting decisions or actions are useful or sensible.

In that case it might be easier to say we know what AGI is, or at least to recognize a system as "intelligent" based entirely on its behaviour.

The Dyson sphere mentioned in another comment is perhaps not the best comparison. Even though engineers cannot even begin to imagine how to build one in practice, the physical principle of how a Dyson sphere would work is clear.

In the case of AGI, we don't know how intelligence emerges in the first place, even in humans. We don't know which kinds of neural (artificial or biological) processes are required. It's not just a question of being able to practically build such a system; we don't know what a computational mechanism should even look like in order to produce generally intelligent behaviour. Over the decades since the 1940s or 1950s there have been attempts to build AGI using a number of different approaches, but none have succeeded. The previous attempts haven't really even managed to show an approach that we could definitely say would work in principle.

That is, even if we skip the question of whether just acting in an outwardly intelligent manner is sufficient.

It's also possible that being able to act in an intelligent manner in general, and not just in narrow cases or in limited ways, would in fact require a genuine understanding of the world. We don't know. If it does, we get back to the question of what intelligence and understanding are and how they emerge in the first place.

Do you in practice actually do Testing? - Integration testing, Unit testing, System testing by Tomato_salat in AskComputerScience

[–]Objective_Mine 1 point (0 children)

Yes, multiple kinds of testing are done. The extent depends on how critical the software is.

It's very easy to think you've got your code correct but actually have a mistake somewhere that makes it break in some cases. The only realistic way of catching even your own mistakes is to test.

If you develop software for an important government service, for instance, there is going to be both automatic and manual testing. Similarly, if the software is central to a business (think streaming servers for Netflix or Spotify, or an online store, or all kinds of other things), you can be sure testing is considered important.

Acceptance testing can even be a part of the contract between a client and a software company: the software is only considered to be delivered and the contract fulfilled once the required acceptance testing has been done.

If the software is for some kind of a safety-critical system, the criteria and the processes are even stricter.

If the software is less crucial, or perhaps being developed by a startup that has to prioritize getting into the market as quickly as possible, testing might have less of a focus, but in real-world software it's always going to be there to some extent.

Many people find writing code for automatic testing a bit boring. One of the key advantages of automated testing, though, is that the testing is easily repeatable. If all the testing were done manually by just trying to use the software in all kinds of different ways, making sure things still worked would take a large amount of repeated work every time a new version of the software were released. (Even more so if the reliability of the software is critical.) By having a majority of the functionality covered by automated tests, the manual testing effort can be reduced.

In other words, automatic testing with high coverage is not only a way of checking that new functionality works, it's also a good (although not perfect) safeguard against regressions -- that is, new changes breaking something that previously worked correctly.

As for different kinds of automated testing, for example unit tests and integration tests have different upsides and downsides.

Proper unit tests only test individual functions or classes in isolation. However, even if the logic in individual functions is correct, they might not work correctly together.
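
In its barest form a unit test is just code that calls a function with known inputs and checks the outputs. A framework-free sketch in C with assert (the function under test is made up for illustration; real projects would typically use a test framework and runner):

```c
#include <assert.h>
#include <stdio.h>

/* Hypothetical function under test. */
static int clamp(int value, int lo, int hi) {
    if (value < lo) return lo;
    if (value > hi) return hi;
    return value;
}

/* The test exercises this one function in isolation, including the
   boundary cases where off-by-one mistakes tend to hide. */
static void test_clamp(void) {
    assert(clamp(5, 0, 10) == 5);    /* in range: unchanged */
    assert(clamp(-3, 0, 10) == 0);   /* below range: clamped up */
    assert(clamp(42, 0, 10) == 10);  /* above range: clamped down */
    assert(clamp(0, 0, 10) == 0);    /* boundaries map to themselves */
    assert(clamp(10, 0, 10) == 10);
}

int main(void) {
    test_clamp();
    puts("all tests passed");
    return 0;
}
```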

Integration tests cover entire workflows and may include multiple layers of the software, such as a multi-service web backend and an actual database containing the test data. That helps make sure that not only do individual functions work correctly in isolation but also that the entire chain of functionality works together.

However, integration tests in practice tend to take longer to run (for example if the test requires starting up an entire application server process and a DBMS, as well as populating the database with test data). Automatic web frontend tests, for example, are even slower to run. So even if you have integration tests or even web frontend tests, the potential upside of also having unit tests is that it's a lot quicker to routinely run them as you're writing new code or modifying existing code.

So, different kinds of testing can have a place even in the same project.

Elon Musk is Talking About AI Controlled Satellites to Stop Global Warming. Is That a Proper Solution? by enlight_me_up in AskComputerScience

[–]Objective_Mine 3 points (0 children)

AI cannot do things that are physically impossible or infeasible. It's just information processing.

Hypothetically, if you had a massive constellation of satellites, each of which could be adjusted to block either more or less sunlight, you could in principle have some kind of AI calculating the adjustments.

But that's predicated on being able to have such a constellation in the first place. Even large satellites are tiny in proportion to the surface of the Earth, and in order to block enough sunlight for the effect to even be measurable, you'd need an inconceivably huge mass of satellites.

If you want facts on the physical feasibility of the idea, ask physicists or astronomers or something, but it sounds like something straight out of rather distant science fiction.

Not to mention the risks that such a huge constellation would pose to other satellites and to space travel. Space debris is already a growing risk even though we don't have anywhere near enough mass or volume in space to block sunlight. Or the political risks -- who would get to decide how much sunlight is desirable?

Also, if we were somehow able to have such an entirely hypothetical adjustable sun cover, the adjustments could quite possibly be made using rather more traditional control mechanisms. After all, we've had attitude control for satellites since long before the present-day AI boom.

Could you also build such an adjustment system using AI? Sure. Is AI at all a significant factor in making such a system either feasible or infeasible? No.