Empathic Architectures and Systems
I am currently involved in the
Empathic Systems Project. The main premise that drives this work is that
the ultimate goal of a computer is to satisfy the end user. Although this may
seem obvious, the design, evaluation, and optimization of current architectures
often leave the user out of the loop. If we can learn about a user's
satisfaction with the computer system and then leverage this information for
optimization, we can (1) improve the efficiency of the computer, and (2)
improve the user's overall experience with the computer.
In a recent ISCA 2008
publication, we show that there is considerable user
variation: variation in the needs and expectations of an individual
user relative to system performance. This variation represents potential
for optimization. In this work, we show that if we can learn user satisfaction,
we can use a mapping from hardware performance counters to predicted user
satisfaction for optimization purposes.
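As a rough illustration of this idea, the sketch below maps a vector of performance-counter readings to a predicted satisfaction score with a toy linear model, then picks the lowest CPU frequency that keeps predicted satisfaction above a target. The counter names, weights, and model here are illustrative assumptions, not the model from the paper.

```python
# Hypothetical sketch: predict user satisfaction from hardware
# performance counters and choose the lowest CPU frequency that still
# meets a satisfaction target. All names and weights are made up.

# Weights learned offline per user (illustrative values): satisfaction
# as a linear function of normalized counter readings.
WEIGHTS = {"instructions_per_cycle": 0.6,
           "cache_miss_rate": -0.3,
           "stall_cycles_frac": -0.4}
BIAS = 0.5

def predict_satisfaction(counters):
    """Map a counter sample to a satisfaction estimate in [0, 1]."""
    score = BIAS + sum(WEIGHTS[k] * v for k, v in counters.items())
    return max(0.0, min(1.0, score))

def pick_frequency(freq_levels, counters_at, target=0.7):
    """Choose the lowest frequency whose predicted satisfaction meets
    the target; fall back to the highest frequency otherwise."""
    for f in sorted(freq_levels):
        if predict_satisfaction(counters_at(f)) >= target:
            return f
    return max(freq_levels)
```

The point of the sketch is the optimization loop: once counters predict satisfaction, power management can scale back until the prediction approaches the user's tolerance rather than a fixed performance target.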
One of the major challenges in this work is to develop methods to understand
user satisfaction during the run of an application. To this end, we have
proposed the idea (in WACI
2008) of incorporating biometric sensors into future architectures for
providing the computer with user-related information. Initial results
motivating the use of biometric sensors for user-aware power optimization
appear in MICRO 2008.
Our recent work in
MICRO 2009 involves studying
user activity patterns to (1) characterize the power consumption of real
mobile devices in real usage environments, and (2) guide the development of
power optimizations for mobile architectures.
Run-time Profiling and Optimization
As the interactions between hardware and software grow increasingly complex, it
is becoming critical to employ run-time profiling for dynamically
detecting optimization opportunities, and run-time optimization techniques for
leveraging these opportunities.
- Hardware-assisted Path Profiling:
I have shown that infrequent hardware branch trace samples (taken using the
Itanium-2 Branch Target Buffer) can be mapped to the control flow graph in a
compiler infrastructure to obtain a relatively accurate profile of hot
execution paths in an
application. This work is published at the
INTERACT-7 workshop and
became the bulk of my M.S. thesis.
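The sketch below illustrates the flavor of this mapping: sampled (source, target) branch pairs, as a hardware branch trace buffer might report them, are aggregated into control flow graph edge counts, and a hot path is recovered by greedily following the most frequent successor edges. The CFG, block names, and greedy heuristic are illustrative, not the algorithm from the paper.

```python
from collections import Counter

# Illustrative sketch: approximate hot-path profiling from sparse
# branch samples. Each sample is a (source_block, target_block) pair,
# like one record from a hardware branch trace buffer. The CFG below
# is a toy diamond: entry -> {a, b} -> c -> exit.

CFG = {"entry": ["a", "b"], "a": ["c"], "b": ["c"], "c": ["exit"]}

def edge_counts(samples):
    """Aggregate sampled branches into CFG edge frequencies,
    discarding samples that do not match a known CFG edge."""
    counts = Counter()
    for src, dst in samples:
        if dst in CFG.get(src, []):
            counts[(src, dst)] += 1
    return counts

def hot_path(counts, entry="entry", exit_block="exit"):
    """Greedily follow the most frequently sampled successor edge
    from the entry block to recover an estimated hot path."""
    path, block = [entry], entry
    while block != exit_block and CFG.get(block):
        block = max(CFG[block], key=lambda s: counts.get((block, s), 0))
        path.append(block)
    return path
```

Because the samples are infrequent, the counts are only statistical estimates; the appeal of the approach is that this accuracy is often sufficient for identifying hot paths while keeping profiling overhead low.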
- Shadow Profiling: I have co-authored a paper on
using shadow processes for low-overhead dynamic binary instrumentation. The
main idea is to periodically fork the program to create a shadow
process, instrument the shadow process, and run it on a spare core of a
multi-core machine. Shadow profiling allows very fine-grained profiling at a
low overhead when parallel hardware exists. This work is published at
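A minimal sketch of the fork-based idea, in Python rather than at the binary level: the parent continues uninstrumented while a forked shadow runs an instrumented copy of the same work and reports its profile over a pipe. The iteration counter here is a stand-in for real dynamic binary instrumentation.

```python
import os

# Toy sketch of shadow profiling: fork the running program, let the
# child ("shadow") execute an instrumented copy of the work on another
# core while the parent continues at full speed. The instrumentation
# (an iteration counter) is illustrative only.

def do_work(items, profile=None):
    total = 0
    for x in items:
        if profile is not None:  # "instrumented" path, shadow only
            profile["iterations"] = profile.get("iterations", 0) + 1
        total += x * x
    return total

def run_with_shadow(items):
    r, w = os.pipe()
    pid = os.fork()
    if pid == 0:                       # shadow process: instrumented run
        os.close(r)
        profile = {}
        do_work(items, profile)
        os.write(w, str(profile["iterations"]).encode())
        os._exit(0)
    os.close(w)                        # parent: uninstrumented run
    result = do_work(items)
    iterations = int(os.read(r, 64) or b"0")
    os.close(r)
    os.waitpid(pid, 0)
    return result, iterations
```

The shadow's slowdown never appears on the application's critical path, which is why the technique can afford heavyweight instrumentation whenever a spare core is available.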
- Dynamic Binary Optimization:
I am currently collaborating with AMD on a dynamic binary
optimizer targeted at leveraging multi-core architectures for improving the
performance of legacy x86 binaries.
Software-Implemented Transient Fault Tolerance
Trends in CMOS scaling indicate that future generations of processors may
contain transistors that are highly susceptible to transient faults (errors
that occur when a bit is randomly flipped). Although hardware approaches
exist, the design and fabrication of hardware is costly. Software may provide
a cheaper and more flexible alternative. I developed a prototype software-only
system that uses process-level redundancy to protect single-threaded
applications from transient faults. The system creates redundant processes
which can be scheduled to multiple threads/cores, and automatically checks
execution between the processes to ensure correct execution. From a research
perspective, the project makes two important contributions:
- The prototype shows that a software redundant multi-threading
approach can be low-overhead and rival previously proposed hardware techniques.
- It shows that fault tolerance at the application level automatically
includes mechanisms for fault detection (e.g., a segfault can serve as a
detection mechanism) and, more importantly, that application-level fault
tolerance is effective at avoiding many benign faults which do not propagate to
affect software correctness.
This work is published at
DSN 2007 (extended journal
version in IEEE TDSC).
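A toy sketch of the process-level redundancy idea: run the same computation in redundant processes and flag any disagreement in the outputs. A real PLR system replicates transparently and compares at the output boundary rather than on return values; the function names here are illustrative.

```python
import multiprocessing as mp

# Toy sketch of process-level redundancy (PLR): execute the same
# computation in redundant processes and check that all copies agree.
# A mismatch signals a transient fault that reached output-visible
# state; benign flips that never affect the result go unreported,
# which is exactly the benefit of checking at the application level.

def compute(x):
    return sum(i * i for i in range(x))

def redundant_run(fn, arg, copies=2):
    """Run fn(arg) in `copies` separate processes; raise on any
    disagreement, otherwise return the agreed-upon result."""
    with mp.Pool(copies) as pool:
        results = pool.map(fn, [arg] * copies)
    if len(set(results)) != 1:
        raise RuntimeError("output mismatch: possible transient fault")
    return results[0]
```

Because the redundant copies are ordinary processes, the operating system is free to schedule them onto idle cores, which is what keeps the overhead low on multi-core hardware.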