Is there any area of research which…


deleted1142566

…explores faster computing methods?

I’m not talking about quantum computing or any new technology. I’m talking about tricks within arithmetic that can be used to speed up computation by making use of heuristics and shortcuts (the way Shakuntala Devi did).

For example:

248 + 208 = 456

You could solve this by doing regular addition and carrying the 1.

But a faster mental math trick/heuristic is the following:

248 + 208 = 200 + 200 + 48 + 8 = 400 + 56 = 456
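
Expressed as code, here is a minimal Python sketch of that same decomposition (the name split_add is mine, purely for illustration):

```python
def split_add(a, b):
    """Add two numbers by splitting off the hundreds first,
    mirroring the mental-math trick above."""
    hundreds = (a // 100 + b // 100) * 100   # 200 + 200 = 400
    remainder = a % 100 + b % 100            # 48 + 8 = 56
    return hundreds + remainder              # 400 + 56 = 456

print(split_add(248, 208))  # 456
```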

Many so-called “human calculators” use such heuristics/shortcuts to reach answers quickly. Essentially, they see patterns within the problem that allow them to solve it faster.

And if you could combine such human heuristics with actual computing power, then theoretically couldn’t you get even faster results?

Just curious, since I recently spent a weekend with my Indian grandmother, who showed me many such arithmetic heuristics that she learned from her guru.

I know this is long dead, but for anyone who comes across this and is curious: yes, there are fields that explore increasing the speed and efficiency of computations. This would fall under computer science (if we're dealing with software and algorithms) or electrical engineering (if we're talking chip design). BUT generally the research questions nowadays deal with complex algorithms that take a long time to run. Computers add and subtract differently than we do. Multiplication is essentially built out of additions (and shifts), and division generally relies on similar "tricks" (usually clever sequences of additions or multiplications). The operations in isolation are very fast (to a point); they tend to be limited only by the clock speed of the CPU and the number of cores available to carry them out - to us they occur practically instantly.
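
To illustrate that "multiplication is essentially addition" point, here is a rough Python sketch of shift-and-add multiplication, the textbook scheme simple hardware multipliers are built around (an illustration of the idea, not what any particular CPU literally runs):

```python
def shift_and_add_multiply(a, b):
    """Multiply two non-negative integers by adding shifted copies of a,
    one for each set bit of b - the way simple hardware multipliers work."""
    result = 0
    while b:
        if b & 1:        # lowest bit of b is set
            result += a  # add the current shifted copy of a
        a <<= 1          # shift a left (i.e., double it)
        b >>= 1          # move on to the next bit of b
    return result

print(shift_and_add_multiply(248, 208))  # 51584, same as 248 * 208
```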

There are some limitations to using "heuristics" the way a human would. Simple math operations are done in hardware. ALL calculations done by the computer must be translated to binary and then fed through these hardware operators (usually adders and multipliers). The architecture is set in stone, so to speak. Could you design a chip with a different architecture to optimize performance for a particular problem? Yes. GPU cores are a good example: they are designed to rapidly compute very specific kinds of calculations, and there are thousands of them in a single GPU. The trade-off is that they are not very useful for general computation.
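
For a sense of what those hardware adders are doing, here is a crude software model of the sum/carry logic (hypothetical and simplified - a real adder is parallel combinational logic etched into the chip, not a loop):

```python
def bitwise_add(a, b):
    """Add two non-negative integers using only XOR and AND,
    mimicking the sum and carry signals of a hardware adder."""
    while b:
        carry = (a & b) << 1  # bit positions that generate a carry
        a = a ^ b             # partial sum, ignoring carries
        b = carry             # feed the carries back in
    return a

print(bitwise_add(248, 208))  # 456
```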

So, with that in mind, heuristics ARE used, but maybe not in the way you expect. The math that we type into a computer (whether your PC, cell phone, or calculator) is translated into a format that is optimal for the hardware. The computer does NOT do these operations in the same exact way as you or I do, so there is not a lot of room to speed up the underlying math. If you were to apply common human "tricks", you would just be adding a layer of code that is then translated onto the same physical operators anyway.
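
You can see this for yourself with a quick (admittedly crude) timing test in Python; the exact numbers will vary by machine, but the "trick" version consistently loses because it is just more work layered on top of the same adder:

```python
import timeit

setup = "a, b = 248, 208"

# Plain addition: one trip through the built-in integer add.
plain = timeit.timeit("a + b", setup=setup, number=1_000_000)

# The mental-math trick written out: extra divisions, multiplications,
# and additions running on top of the very same hardware.
trick = timeit.timeit("(a // 100 + b // 100) * 100 + (a % 100 + b % 100)",
                      setup=setup, number=1_000_000)

print(f"plain: {plain:.4f}s  trick: {trick:.4f}s")
```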

At some point, there is a question of diminishing returns. Most code is not exactly optimized. No one is writing instructions in binary anymore - there are already multiple layers of abstraction in the form of compilers and high-level programming languages. These are the areas in which optimization occurs most often, i.e., designing efficient code, using a lower-level language, or designing a new algorithm to handle a hard problem. If you've already optimized everything else, you could try to speed up the computation of huge numbers (or simple operations done thousands of times). But when dealing with basic calculations, we cycle back to the hardware limitations - there is no way (without designing a new chip) to speed up 1+1.
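
As an example of where that kind of optimization actually pays off, compare a pure-Python loop with the same work pushed into a compiled, vectorized library (this assumes NumPy is installed; it is just a sketch of the general idea):

```python
import timeit

setup = (
    "import numpy as np\n"
    "data = list(range(1_000_000))\n"
    "arr = np.arange(1_000_000)"
)

# A million small additions in interpreted Python: slow because of the
# per-iteration interpreter overhead, not because addition itself is slow.
loop_time = timeit.timeit("sum(x + 1 for x in data)", setup=setup, number=10)

# The same additions handed off to NumPy's compiled C routines.
numpy_time = timeit.timeit("(arr + 1).sum()", setup=setup, number=10)

print(f"python loop: {loop_time:.3f}s  numpy: {numpy_time:.3f}s")
```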
 