diff --git a/CNAME b/CNAME
new file mode 100644
index 0000000000..1a0f8574de
--- /dev/null
+++ b/CNAME
@@ -0,0 +1 @@
+abhinavmuraleedharan.com
\ No newline at end of file
diff --git a/data/Abhinav_Muraleedharan_CV_Dec.pdf b/data/Abhinav_Muraleedharan_CV_Dec.pdf
new file mode 100644
index 0000000000..6fa02f9225
Binary files /dev/null and b/data/Abhinav_Muraleedharan_CV_Dec.pdf differ
diff --git a/data/Abhinav_Muraleedharan_CV_Sep.pdf b/data/Abhinav_Muraleedharan_CV_Sep.pdf
new file mode 100644
index 0000000000..c7b9f8f5be
Binary files /dev/null and b/data/Abhinav_Muraleedharan_CV_Sep.pdf differ
diff --git a/data/JonBarron-bio.txt b/data/JonBarron-bio.txt
index 1cfc8ed14f..1d3152fdc7 100644
--- a/data/JonBarron-bio.txt
+++ b/data/JonBarron-bio.txt
@@ -1,9 +1 @@
-Jon Barron is a senior staff research scientist at Google Research in San
-Francisco, where he works on computer vision and machine learning. He received
-a PhD in Computer Science from the University of California, Berkeley in 2013,
-where he was advised by Jitendra Malik, and he received a Honours BSc in
-Computer Science from the University of Toronto in 2007. He received a National
-Science Foundation Graduate Research Fellowship in 2009, the C.V. Ramamoorthy
-Distinguished Research Award in 2013, and the PAMI Young Researcher Award in
-2020. His works have received awards at ECCV 2016, TPAMI 2016, ECCV 2020, ICCV
-2021, CVPR 2022, the 2022 Communications of the ACM, and ICLR 2023.
\ No newline at end of file
+I am a graduate student at the University of Toronto. At UofT, I work under the supervision of Prof. Nathan Wiebe. I'm interested in theoretical aspects of reinforcement learning and quantum computing.
\ No newline at end of file
diff --git a/data/MEng_thesis_v2.pdf b/data/MEng_thesis_v2.pdf
new file mode 100644
index 0000000000..886482af5a
Binary files /dev/null and b/data/MEng_thesis_v2.pdf differ
diff --git a/data/abhinav_bdp.bib b/data/abhinav_bdp.bib
new file mode 100644
index 0000000000..fdc8b55e16
--- /dev/null
+++ b/data/abhinav_bdp.bib
@@ -0,0 +1,6 @@
+@article{muraleedharan2023beyond,
+ title={Beyond dynamic programming},
+ author={Muraleedharan, Abhinav},
+ journal={arXiv preprint arXiv:2306.15029},
+ year={2023}
+}
\ No newline at end of file
diff --git a/data/thesis.pdf b/data/thesis.pdf
new file mode 100644
index 0000000000..df56c8e9e0
Binary files /dev/null and b/data/thesis.pdf differ
diff --git a/formal_analysis_of_life.html b/formal_analysis_of_life.html
new file mode 100644
index 0000000000..a30efc4f91
--- /dev/null
+++ b/formal_analysis_of_life.html
@@ -0,0 +1,120 @@
+
+
+
+
+
+ Formal Analysis of Life
+
+
+
+
+
+
+
Formal Analysis of Life
+
Abhinav Muraleedharan
+
+
+
+
+
+
Death
+
Any discussion about life should start with death. Death is the singular event that adds infinite value to every single second of human life. Hence, the most important task in any person's life is to figure out how to optimally allocate this resource of infinite value (time).
+
On a day-to-day basis, in our professional lives, we deal with questions like: What projects should we work on? Who should we spend time with? What books should we spend time reading? Which job or company should we choose?
+
There are two ways to answer these questions. The first, which many people follow, is to estimate the return on each available choice and pick the one with the maximum expected cumulative return. In the context of selecting projects or jobs, this means choosing the one that provides the highest return or salary.
+
This line of thinking, however, ignores the point we started with: every second of our life is of infinite value, while the return on any project is finite. Should we waste time on projects of limited significance, or should we work on the grand open challenges in science and at the frontiers of technology? If not by monetary value, how do we make decisions that might look sub-optimal in the short term but optimal in the long term? The big shift is to treat time as the only currency we have, and to see every life decision as an investment of time in some form.
+
+
+
+
Defining 'Purpose'
+
Mathematically, physical systems can be seen as optimization engines that extremize some functional over time. For instance, the dynamics of a pendulum follows from making the action \( S = \int ( T(t) - V(t) )\, dt \) stationary, where \( T \) is the kinetic energy and \( V \) the potential energy (the principle of least action). Your brain is a physical system, and you can view it as an optimization engine too. In the case of the brain, assuming some notion of free will, we have some say in choosing the form of the functional being optimized.
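As a minimal sketch of this claim, assuming a simple pendulum with mass \( m \), length \( \ell \), and angle \( \theta(t) \) (symbols introduced only for this example), requiring the action to be stationary gives the Euler-Lagrange equation and the familiar equation of motion:
\[ \frac{d}{dt}\frac{\partial \mathcal{L}}{\partial \dot{\theta}} - \frac{\partial \mathcal{L}}{\partial \theta} = 0, \qquad \mathcal{L} = \tfrac{1}{2} m \ell^2 \dot{\theta}^2 - m g \ell \left(1 - \cos\theta\right) \;\;\Rightarrow\;\; \ddot{\theta} + \frac{g}{\ell}\sin\theta = 0. \]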
+
+
+
+
The Optimization Problem
+
Pure Impact
+
\[ J = \int_{t=0}^{T} I(t)\, dt \]
+
Pure Understanding
+
\[ J = \int_{t=0}^{T} U(t)\, dt \]
+
Understanding + Impact
+
\[ J = \int_{t=0}^{T} \left( I(t) + U(t) \right) dt \]
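A hedged generalization of the last functional, not stated above, is to weight the two terms with assumed preference weights \( \alpha, \beta \ge 0 \):
\[ J = \int_{t=0}^{T} \left( \alpha\, I(t) + \beta\, U(t) \right) dt, \]
which recovers the pure-impact and pure-understanding objectives at \( \beta = 0 \) and \( \alpha = 0 \) respectively.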
+
+
+
+
Power
+
After choosing the functional comes the difficult part: computing actions that optimize it over a long time horizon. This step is hard because, for instance, if your goal is to understand the universe, your actions would amount to working through the open problems in theoretical physics. If your choice is to maximize impact, then...
+
The word 'power' carries a bad connotation. People think power is evil and often mistrust those who hold it. But power simply means how much control you have over the state of the world: can you drive the world toward the state you desire? Power can be roughly categorized into three kinds.
+
+
A Rough Classification
+
Political Power
+
Political power means how much you can influence another person's behavior through communication. If you are charismatic and a great speaker, you will hold a high degree of power over other people.
+
Intellectual Power
+
Intellectual power is proportional to how deeply you can think about a topic without losing attention.
+
Economic Power
+
Economic power is proportional to the amount of money you have in the bank.
+
+
Exponential Ascent and Exponential Descent
+
The most important thing to keep in mind about power is that it follows only two trajectories. Power compounds: the more you have, the better your chances of acquiring more. Either your power grows exponentially, or it decays exponentially toward death.
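One minimal way to formalize this claim, assuming power compounds at a net rate \( k \) proportional to itself (a modelling assumption, not something argued for above), is:
\[ \frac{dP}{dt} = k\, P(t) \;\;\Rightarrow\;\; P(t) = P(0)\, e^{k t}, \]
which ascends exponentially when \( k > 0 \) and decays exponentially when \( k < 0 \).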
+
+
+
+
Qualities
+
Attention (Focus)
+
...
+
Perseverance
+
...
+
+
+
+
Birth
+
This article began with death. Although all we have is finite time in a universe of infinite complexity, having been born as a sentient being is remarkable in itself. What is the probability of me writing this article and of you reading it? Close to zero. The ultimate gift in one's life is life itself. Being alive, pondering the big questions, is a gift. Go and make every second count.
+
+
+
+
Acknowledgements
+
...
+
+
+
+
+
+
diff --git a/images/Abhinav.png b/images/Abhinav.png
new file mode 100644
index 0000000000..a2aaaffb61
Binary files /dev/null and b/images/Abhinav.png differ
diff --git a/images/House_Cup.png b/images/House_Cup.png
new file mode 100644
index 0000000000..910c754a08
Binary files /dev/null and b/images/House_Cup.png differ
diff --git a/images/UofT.png b/images/UofT.png
new file mode 100644
index 0000000000..28c0a216ae
Binary files /dev/null and b/images/UofT.png differ
diff --git a/images/bdp.jpeg b/images/bdp.jpeg
new file mode 100644
index 0000000000..4ac28621a2
Binary files /dev/null and b/images/bdp.jpeg differ
diff --git a/images/misc.jpeg b/images/misc.jpeg
new file mode 100644
index 0000000000..8e73c27fd7
Binary files /dev/null and b/images/misc.jpeg differ
diff --git a/index.html b/index.html
index 741ac7da0d..ae05658eb8 100755
--- a/index.html
+++ b/index.html
@@ -1,9 +1,9 @@
- Jon Barron
+ Abhinav Muraleedharan
-
+
@@ -18,24 +18,25 @@
- Jon Barron
+ Abhinav Muraleedharan
-
I am a senior staff research scientist at Google Research in San Francisco, where I work on computer vision and machine learning.
+
I am a graduate student at the University of Toronto. At UofT, I work under the supervision of Prof. Nathan Wiebe and Prof. Roger Grosse. My research interests span quantum algorithms, reinforcement learning, and alignment of large language models.
- I'm interested in computer vision, machine learning, optimization, and image processing. Much of my research is about inferring the physical world (shape, motion, color, light, etc) from images. Representative papers are highlighted.
My reinforcement learning research focuses on developing efficient algorithms for training generally intelligent agents. I also work on efficient quantum algorithms for training large-scale machine learning models.
Combining DreamBooth (personalized text-to-image) and DreamFusion (text-to-3D) yields high-quality, subject-specific 3D assets with text-driven modifications
- Representing neural fields as a composition of manipulable and interpretable components lets you do things like reason about frequencies and scale.
-
- We denoise images efficiently by predicting spatially-varying kernels at low resolution and using a fast fused op to jointly upsample and apply these kernels at full resolution.
-
- A simple and fast Bayesian algorithm that can be written in ~10 lines of code outperforms or matches giant CNNs on image binarization, and unifies three classic thresholding algorithms.
-
- Extensive experimentation yields a simple optical flow technique that is trained on only unlabeled videos, but still works as well as supervised techniques.
-
A single robust loss function is a superset of many other common robust loss functions, and allows training to automatically adapt the robustness of its own loss.
Color space can be aliased, allowing white balance models to be learned and evaluated in the frequency domain. This improves accuracy by 13-20% and speed by 250-3000x.
By embedding a stereo optimization problem in "bilateral-space" we can very quickly solve for an edge-aware depth map, letting us render beautiful depth-of-field effects.
- We present a technique for efficient per-voxel linear classification, which enables accurate and fast semantic segmentation of volumetric Drosophila imagery.
-
By embedding mixtures of shapes & lights into a soft segmentation of an image, and by leveraging the output of the Kinect, we can extend SIRFS to scenes.
-
- TPAMI Journal version: version / bibtex
-
In this paper, I introduced Score-life programming, a novel theoretical approach for solving reinforcement learning problems. In contrast with classical dynamic-programming-based methods, this approach can search over non-stationary policy functions and directly compute optimal infinite-horizon action sequences from a given state.
Markov Decision Problems which lie in a low-dimensional latent space can be decomposed, allowing modified RL algorithms to run orders of magnitude faster in parallel.
- Feel free to steal this website's source code. Do not scrape the HTML from this page itself, as it includes analytics tags that you do not want on your own website — use the github code instead. Also, consider using Leonid Keselman's Jekyll fork of this page.
+ The code for this website is borrowed from Jon Barron's source code. Do not scrape the HTML from this page itself, as it includes analytics tags that you do not want on your own website; use the github code instead. Also, consider using Leonid Keselman's Jekyll fork of this page.