I recently watched a SICP lecture in which Sussman demonstrated how it is possible to implement Scheme's cons, car, and cdr using nothing but procedures.
It went something like this:
(define (cons x y)
  (lambda (m) (m x y)))

(define (car z)
  (z (lambda (p q) p)))

(define (cdr z)
  (z (lambda (p q) q)))
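For anyone who doesn't read Scheme, here is my rough translation of the same trick into Python closures (this is my own sketch, not from the lecture):

```python
# A "pair" is just a function that remembers x and y and hands them
# to whatever selector function m you pass in later.
def cons(x, y):
    return lambda m: m(x, y)

# car/cdr work by passing in a selector that picks one element.
def car(z):
    return z(lambda p, q: p)

def cdr(z):
    return z(lambda p, q: q)

pair = cons(1, 2)
print(car(pair))  # 1
print(cdr(pair))  # 2
```

No tuple, list, or struct anywhere: the "data structure" is nothing but procedures.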
This got me thinking: what exactly are data structures? When a language is created, are its data structures implemented as abstractions built up out of procedures? And if they are just made of procedures, what are procedures, really, at the lowest level?
I guess what I want to know is: what is at the bottom of the abstraction chain (unless it happens to be abstractions all the way down)? At what point does it become hardware?
Cool question!
It becomes hardware as soon as something hands your instructions off to the processor (CPU). If you have to read a chip manufacturer's manual to figure out how to do what you want, you're at the level you describe.
Here's an overview of the path from physics to hardware to code to users: http://ocw.mit.edu/courses/electrical-engineering-and-computer-science/6-002-circuits-and-electronics-spring-2007/video-lectures/lecture-1/ though it doesn't really touch on lambda calculus. (But the lecture just inspired your question; it's not the question per se, right?)
There is a step where whoever implements the language has to interface with processor instructions, e.g. https://en.wikipedia.org/wiki/X86_instruction_listings or https://en.wikipedia.org/wiki/X86_calling_conventions.
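You can even watch one layer of this translation from inside a high-level language. CPython, for example, compiles your source into bytecode for its virtual machine before anything ever reaches real processor instructions; the standard `dis` module will show it to you. (This is my own illustration, not something from the links above.)

```python
import dis

def add(a, b):
    return a + b

# Print the bytecode the CPython compiler produced for add().
# You'll see low-level-looking steps like LOAD_FAST and a binary-add
# opcode -- one rung down the abstraction ladder, though still far
# above actual x86 instructions.
dis.dis(add)
```

Running this shows that even `a + b` is several machine-like steps under the hood, and the interpreter itself then maps those onto real CPU instructions.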
Peek at kernel programming, or think about embedded systems (in a microwave, on the wing of an airplane), ARM chips, or mobile devices: these are the things people program that have non-laptop chipsets.
People who write BLAS (linear algebra) libraries sometimes get down to this level of detail. For example https://en.wikipedia.org/wiki/Math_Kernel_Library or https://en.wikipedia.org/wiki/Automatically_Tuned_Linear_Algebra_Software. When they say a BLAS is "tuned", what do they mean? They mean sniffing out facts about your particular processor (cache sizes and the like) and changing how the innermost loops make their decisions, so less time is wasted given the way the physical thing is configured.
If I recall correctly, high-level programming languages (like C ;) ) make no assumptions about the system they will run on, so they make agnostic calls that run something like ten times† slower than they would if they knew ahead of time which kind of call to make. Every. Time. This is the kind of thing that could drive you insane, but it's the classic engineering tradeoff: the technical people's time versus the end user's performance. I guess if you become a kernel programmer or embedded-systems programmer you can help put an end to all the wasted clock cycles on computers across the globe, with processors getting hot over pointless back-and-forth. (Although there are clearly worse things that go on in the world.)
†: I just quickly searched for typical BLAS speedups, and yeah, a tuned BLAS can be a factor of like 15 or 20 faster. So I don't think I'm exaggerating or misremembering what I heard about wasted movement. By the way, something similar goes on in power generation: the final step (the turbine) is only something like 20% efficient. Doesn't all that waste just drive you crazy?! Time to become an engineer. ;)
A cool project you could check out is MenuetOS, an operating system written entirely in assembly language.
Yet more cool stuff to look at: this guy says it's actually pretty easy and fun to learn x86 assembly language. (!)
You can also read about the olden days, when there was less distance between software and hardware (e.g., programming with punch cards). Thankfully people have written "high-level" languages that are more like the way people talk and think, and less like moving some tape around. Data structures may not be the most obvious thing from everyday conversation, but they are more abstract than [a sequence of GOTO and STORE instructions...].
HTH