From 270ec29912db299c4307e5c7bd724b4bcd26c981 Mon Sep 17 00:00:00 2001
From: Jared Ping
Date: Mon, 29 Jan 2024 19:19:06 +0200
Subject: [PATCH 1/8] Optimisations: Remove incomplete sparsity matrix filter illustration (included further below)

---
 contents/optimizations/optimizations.qmd | 2 --
 1 file changed, 2 deletions(-)

diff --git a/contents/optimizations/optimizations.qmd b/contents/optimizations/optimizations.qmd
index de19f8e11..ba9d70267 100644
--- a/contents/optimizations/optimizations.qmd
+++ b/contents/optimizations/optimizations.qmd
@@ -755,8 +755,6 @@ Different devices may have different memory hierarchies. Optimizing for the spec

Pruning is a fundamental approach to compressing models to make them compatible with resource-constrained devices. It results in sparse models in which many of the weights are zero, and leveraging this sparsity can lead to significant improvements in performance. Tools have been created to achieve exactly this. RAMAN is a sparse TinyML accelerator designed for inference on edge devices. RAMAN overlaps input and output activations on the same memory space, reducing storage requirements by up to 50%. [@krishna2023raman]

-![A figure showing the sparse columns of the filter matrix of a CNN that are aggregated to create a dense matrix that, leading to smaller dimensions in the matrix and more efficient computations. [@kung2018packing]
-
#### Optimization Frameworks

Optimization frameworks have been introduced to exploit the specific capabilities of the hardware to accelerate the software. One example of such a framework is hls4ml. This open-source software-hardware co-design workflow aids in interpreting and translating machine learning algorithms for implementation with both FPGA and ASIC technologies. Features such as network optimization, new Python APIs, quantization-aware pruning, and end-to-end FPGA workflows are embedded into the hls4ml framework, leveraging parallel processing units, memory hierarchies, and specialized instruction sets to optimize models for edge hardware. Moreover, hls4ml is capable of translating machine learning algorithms directly into FPGA firmware.
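The paragraph above makes a storage argument: once pruning has zeroed out most weights, only the surviving values need to be kept, which is what sparse accelerators such as RAMAN exploit. The sketch below is a generic, editor-added illustration of that point only — it is not RAMAN's storage scheme, and the matrix size and 90% sparsity ratio are arbitrary assumptions — packing a sparse weight matrix into a CSR-style layout and comparing byte counts.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical pruned layer: a 256x256 weight matrix with ~90% of entries zeroed.
dense = rng.normal(size=(256, 256)).astype(np.float32)
dense[rng.random(dense.shape) < 0.9] = 0.0

# CSR-style storage: keep only the nonzero values, their column indices,
# and one offset per row marking where that row's values start.
values = dense[dense != 0.0]                              # nonzero payload (row-major order)
col_idx = np.nonzero(dense)[1].astype(np.int16)           # column index of each nonzero
row_ptr = np.zeros(dense.shape[0] + 1, dtype=np.int32)
row_ptr[1:] = np.cumsum(np.count_nonzero(dense, axis=1))  # row boundaries

dense_bytes = dense.nbytes
sparse_bytes = values.nbytes + col_idx.nbytes + row_ptr.nbytes
print(f"dense:  {dense_bytes} bytes")
print(f"sparse: {sparse_bytes} bytes ({100 * sparse_bytes / dense_bytes:.1f}% of dense)")
```

At this sparsity the packed form occupies roughly 15% of the dense array, which is the kind of saving the chapter attributes to sparsity-aware storage.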
From 8e795c7cf736e070d3fd603c1ba4d1ae571e0442 Mon Sep 17 00:00:00 2001
From: Jared Ping
Date: Mon, 29 Jan 2024 19:21:10 +0200
Subject: [PATCH 2/8] Optimisations: Missing reference for quantization-aware pruning

---
 contents/optimizations/optimizations.bib | 13 +++++++++++++
 contents/optimizations/optimizations.qmd |  2 +-
 2 files changed, 14 insertions(+), 1 deletion(-)

diff --git a/contents/optimizations/optimizations.bib b/contents/optimizations/optimizations.bib
index 7b78c4f2c..602561bf4 100644
--- a/contents/optimizations/optimizations.bib
+++ b/contents/optimizations/optimizations.bib
@@ -473,3 +473,16 @@ @misc{zhou2021analognets
   title = {{AnalogNets:} {Ml-hw} Co-Design of Noise-robust {TinyML} Models and Always-On Analog Compute-in-Memory Accelerator},
   year = {2021}
 }
+
+@article{hawks2021psandqs,
+  title = {Ps and Qs: Quantization-Aware Pruning for Efficient Low Latency Neural Network Inference},
+  volume = {4},
+  ISSN = {2624-8212},
+  url = {http://dx.doi.org/10.3389/frai.2021.676564},
+  DOI = {10.3389/frai.2021.676564},
+  journal = {Frontiers in Artificial Intelligence},
+  publisher = {Frontiers Media SA},
+  author = {Hawks, Benjamin and Duarte, Javier and Fraser, Nicholas J. and Pappalardo, Alessandro and Tran, Nhan and Umuroglu, Yaman},
+  year = {2021},
+  month = jul
+}
\ No newline at end of file

diff --git a/contents/optimizations/optimizations.qmd b/contents/optimizations/optimizations.qmd
index ba9d70267..955a15b83 100644
--- a/contents/optimizations/optimizations.qmd
+++ b/contents/optimizations/optimizations.qmd
@@ -649,7 +649,7 @@ Accuracy: The reduction in numerical precision post-quantization can lead to a d

### Quantization and Pruning

-Pruning and quantization work well together, and it's been found that pruning doesn't hinder quantization. In fact, pruning can help reduce quantization error. Intuitively, this is due to pruning reducing the number of weights to quantize, thereby reducing the accumulated error from quantization. For example, an unpruned AlexNet has 60 million weights to quantize whereas a pruned AlexNet only has 6.7 million weights to quantize. This significant drop in weights helps reduce the error between quantizing the unpruned AlexNet vs. the pruned AlexNet. Furthermore, recent work has found that quantization-aware pruning generates more computationally efficient models than either pruning or quantization alone; It typically performs similar to or better in terms of computational efficiency compared to other neural architecture search techniques like Bayesian optimization [Ps and Qs: Quantization-Aware Pruning for Efficient Low Latency Neural Network Inference][2021](https://arxiv.org/pdf/2102.11289.pdf).
+Pruning and quantization work well together, and it's been found that pruning doesn't hinder quantization. In fact, pruning can help reduce quantization error. Intuitively, this is because pruning reduces the number of weights to quantize, thereby reducing the accumulated error from quantization. For example, an unpruned AlexNet has 60 million weights to quantize, whereas a pruned AlexNet has only 6.7 million. This significant drop in weights reduces the accumulated quantization error of the pruned AlexNet relative to the unpruned one. Furthermore, recent work has found that quantization-aware pruning generates more computationally efficient models than either pruning or quantization alone; it typically matches or exceeds the computational efficiency of other neural architecture search techniques such as Bayesian optimization [(@hawks2021psandqs)](https://arxiv.org/pdf/2102.11289.pdf).

![Accuracy vs. compression rate under different compression methods. Pruning and quantization work best when combined (@han2015deep).](images/png/efficientnumerics_qp1.png){#fig-compression-methods}
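Patch 2 above adds the citation for the claim that pruning reduces the number of weights to quantize and therefore the accumulated quantization error. The NumPy sketch below is only an editor-added illustration of that intuition; the random weight tensor, the 90% magnitude-pruning ratio, and the symmetric int8 scheme are assumptions, not taken from the chapter or from @hawks2021psandqs.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical weight tensor standing in for one layer of a network.
weights = rng.normal(0.0, 0.5, size=10_000).astype(np.float32)

def quantize_int8(w: np.ndarray) -> np.ndarray:
    """Symmetric uniform int8 quantization, then dequantization back to float."""
    scale = np.max(np.abs(w)) / 127.0
    q = np.round(w / scale).astype(np.int8)
    return q.astype(np.float32) * scale

def prune_smallest(w: np.ndarray, fraction: float) -> np.ndarray:
    """Magnitude pruning: zero out the given fraction of smallest-magnitude weights."""
    threshold = np.quantile(np.abs(w), fraction)
    return np.where(np.abs(w) < threshold, 0.0, w)

# Accumulated quantization error with and without pruning first.
err_dense = np.abs(weights - quantize_int8(weights)).sum()
pruned = prune_smallest(weights, 0.9)
err_pruned = np.abs(pruned - quantize_int8(pruned)).sum()

print(f"accumulated error, unpruned: {err_dense:.1f}")
print(f"accumulated error, pruned:   {err_pruned:.1f}")  # much smaller: zeros quantize exactly
```

Because pruned weights are exactly zero, they quantize with no error, so the accumulated error is dominated by the surviving weights — the same reasoning the paragraph applies to the 60-million- versus 6.7-million-weight AlexNet comparison.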
From 8f08fd79bf4717a7f4a5ebee3902bfcffce30282 Mon Sep 17 00:00:00 2001
From: Jared Ping
Date: Mon, 29 Jan 2024 19:41:53 +0200
Subject: [PATCH 3/8] Benchmarking: Fixed reference rendering

---
 contents/benchmarking/benchmarking.qmd | 16 ++++++++--------
 1 file changed, 8 insertions(+), 8 deletions(-)

diff --git a/contents/benchmarking/benchmarking.qmd b/contents/benchmarking/benchmarking.qmd
index ff7e30d31..083276ea5 100644
--- a/contents/benchmarking/benchmarking.qmd
+++ b/contents/benchmarking/benchmarking.qmd
@@ -132,11 +132,11 @@ Macro-benchmarks provide a holistic view, assessing the end-to-end performance o

Examples: These benchmarks evaluate the AI model:

-* [MLPerf Inference][https://github.com/mlcommons/inference](@reddi2020mlperf): An industry-standard set of benchmarks for measuring the performance of machine learning software and hardware. MLPerf has a suite of dedicated benchmarks for specific scales, such as [MLPerf Mobile](https://github.com/mlcommons/mobile_app_open) for mobile class devices and [MLPerf Tiny](https://github.com/mlcommons/tiny), which focuses on microcontrollers and other resource-constrained devices.
+* [MLPerf Inference](https://github.com/mlcommons/inference)(@reddi2020mlperf): An industry-standard set of benchmarks for measuring the performance of machine learning software and hardware. MLPerf has a suite of dedicated benchmarks for specific scales, such as [MLPerf Mobile](https://github.com/mlcommons/mobile_app_open) for mobile-class devices and [MLPerf Tiny](https://github.com/mlcommons/tiny), which focuses on microcontrollers and other resource-constrained devices.

* [EEMBC's MLMark](https://github.com/eembc/mlmark): A benchmarking suite for evaluating the performance and power efficiency of embedded devices running machine learning workloads. This benchmark provides insights into how different hardware platforms handle tasks like image recognition or audio processing.

-* [AI-Benchmark][https://ai-benchmark.com/](@ignatov2018ai): A benchmarking tool designed for Android devices, it valuates the performance of AI tasks on mobile devices, encompassing various real-world scenarios like image recognition, face parsing, and optical character recognition.
+* [AI-Benchmark](https://ai-benchmark.com/)(@ignatov2018ai): A benchmarking tool designed for Android devices that evaluates the performance of AI tasks on mobile devices, encompassing various real-world scenarios like image recognition, face parsing, and optical character recognition.

#### End-to-end Benchmarks

@@ -240,9 +240,9 @@ Training metrics, when viewed from a systems perspective, offer insights that tr

The following metrics are often considered important:

-1. **Training Time:** The time taken to train a model from scratch until it reaches a satisfactory performance level. It is a direct measure of the computational resources required to train a model. For example, [Google's BERT][https://arxiv.org/abs/1810.04805](@devlin2018bert) model is a natural language processing model that requires several days to train on a massive corpus of text data using multiple GPUs. The long training time is a significant challenge in terms of resource consumption and cost.
+1. **Training Time:** The time taken to train a model from scratch until it reaches a satisfactory performance level. It is a direct measure of the computational resources required to train a model. For example, [Google's BERT](https://arxiv.org/abs/1810.04805)(@devlin2018bert) is a natural language processing model that requires several days to train on a massive corpus of text data using multiple GPUs. The long training time is a significant challenge in terms of resource consumption and cost.

-2. **Scalability:** How well the training process can handle increases in data size or model complexity. Scalability can be assessed by measuring training time, memory usage, and other resource consumption as data size or model complexity increases. [OpenAI's GPT-3][https://arxiv.org/abs/2005.14165](@brown2020language) model has 175 billion parameters, making it one of the largest language models in existence. Training GPT-3 required extensive engineering efforts to scale up the training process to handle the massive model size. This involved the use of specialized hardware, distributed training, and other techniques to ensure that the model could be trained efficiently.
+2. **Scalability:** How well the training process can handle increases in data size or model complexity. Scalability can be assessed by measuring training time, memory usage, and other resource consumption as data size or model complexity increases. [OpenAI's GPT-3](https://arxiv.org/abs/2005.14165)(@brown2020language) model has 175 billion parameters, making it one of the largest language models in existence. Training GPT-3 required extensive engineering efforts to scale up the training process to handle the massive model size. This involved the use of specialized hardware, distributed training, and other techniques to ensure that the model could be trained efficiently.

3. **Resource Utilization:** The extent to which the training process utilizes available computational resources such as CPU, GPU, memory, and disk I/O. High resource utilization can indicate an efficient training process, while low utilization can suggest bottlenecks or inefficiencies. For instance, training a convolutional neural network (CNN) for image classification requires significant GPU resources. Utilizing multi-GPU setups and optimizing the training code for GPU acceleration can greatly improve resource utilization and training efficiency.

@@ -441,7 +441,7 @@ Keyword spotting was selected as a task because it is a common usecase in TinyML

#### Dataset

-[Google Speech Commands][https://www.tensorflow.org/datasets/catalog/speech_commands](@warden2018speech) was selected as the best dataset to represent the task. The dataset is well established in the research community and has permissive licensing which allows it to be easily used in a benchmark.
+[Google Speech Commands](https://www.tensorflow.org/datasets/catalog/speech_commands)(@warden2018speech) was selected as the best dataset to represent the task. The dataset is well established in the research community and has permissive licensing, which allows it to be easily used in a benchmark.

#### Model

@@ -457,7 +457,7 @@ MLPerf Tiny uses [EEMBCs EnergyRunner™ benchmark harness](https://github.com/e

#### Baseline Submission

-Baseline submissions are critical for contextualizing results and acting as a reference point to help participants get started. The baseline submission should prioritize simplicity and readability over state of the art performance. The keyword spotting baseline uses a standard [STM microcontroller](https://www.st.com/en/microcontrollers-microprocessors.html) as it's hardware and [TensorFlow Lite for Microcontrollers][https://www.tensorflow.org/lite/microcontrollers](@david2021tensorflow) as it's inference framework.
+Baseline submissions are critical for contextualizing results and acting as a reference point to help participants get started. The baseline submission should prioritize simplicity and readability over state-of-the-art performance. The keyword spotting baseline uses a standard [STM microcontroller](https://www.st.com/en/microcontrollers-microprocessors.html) as its hardware and [TensorFlow Lite for Microcontrollers](https://www.tensorflow.org/lite/microcontrollers)(@david2021tensorflow) as its inference framework.

### Challenges and Limitations

@@ -520,7 +520,7 @@ Standardization of benchmarks is another important solution to mitigate benchmar

Third-party verification of results can also be a valuable tool in mitigating benchmark engineering. This involves having an independent third party verify the results of a benchmark test to ensure their credibility and reliability.
Third-party verification can help to build confidence in the results and can provide a valuable means of validating the performance and capabilities of AI systems.

-Resource: [Benchmarking TinyML Systems: Challenges and Directions][https://arxiv.org/pdf/2003.04821.pdf](@banbury2020benchmarking)
+Resource: [Benchmarking TinyML Systems: Challenges and Directions](https://arxiv.org/pdf/2003.04821.pdf)(@banbury2020benchmarking)

![](images/png/mlperf_tiny.png)

@@ -556,7 +556,7 @@ Source:

#### COCO (2014)

-The [Common Objects in Context (COCO) dataset][https://cocodataset.org/](@lin2014microsoft), released in 2014, further expanded the landscape of machine learning datasets by introducing a richer set of annotations. COCO consists of images containing complex scenes with multiple objects, and each image is annotated with object bounding boxes, segmentation masks, and captions. This dataset has been instrumental in advancing research in object detection, segmentation, and image captioning.
+The [Common Objects in Context (COCO) dataset](https://cocodataset.org/)(@lin2014microsoft), released in 2014, further expanded the landscape of machine learning datasets by introducing a richer set of annotations. COCO consists of images containing complex scenes with multiple objects, and each image is annotated with object bounding boxes, segmentation masks, and captions. This dataset has been instrumental in advancing research in object detection, segmentation, and image captioning.

![](images/png/coco.png)
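The training metrics in the benchmarking hunks above — training time and scalability — are ultimately measured by timing runs at increasing workload sizes. The snippet below is a toy, editor-added probe of that procedure; it is not MLPerf or the book's code, and the linear model, feature count, and dataset sizes are arbitrary assumptions.

```python
import time
import numpy as np

rng = np.random.default_rng(0)

def train_linear_model(n_samples: int, n_features: int = 512, epochs: int = 5) -> float:
    """Toy gradient-descent loop; returns wall-clock training time in seconds."""
    X = rng.normal(size=(n_samples, n_features)).astype(np.float32)
    y = rng.normal(size=n_samples).astype(np.float32)
    w = np.zeros(n_features, dtype=np.float32)
    start = time.perf_counter()
    for _ in range(epochs):
        grad = X.T @ (X @ w - y) / n_samples  # mean-squared-error gradient
        w -= 0.01 * grad
    return time.perf_counter() - start

# Scalability probe: how does training time grow as the dataset grows?
for n in (10_000, 20_000, 40_000):
    print(f"{n:>6} samples: {train_linear_model(n):.3f} s")
```

Plotting wall-clock time against dataset size in this way is the simplest form of the scalability assessment the text describes; real benchmarks additionally track memory, accelerator utilization, and cost.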
From 7785a303188d2b53aa5d6e2dc54d62eb1df78e31 Mon Sep 17 00:00:00 2001
From: Jared Ping
Date: Mon, 29 Jan 2024 19:44:32 +0200
Subject: [PATCH 4/8] Ops: Fix figure 14.3 rendering

---
 contents/ops/ops.qmd | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/contents/ops/ops.qmd b/contents/ops/ops.qmd
index f5075a109..55907cf4b 100644
--- a/contents/ops/ops.qmd
+++ b/contents/ops/ops.qmd
@@ -239,7 +239,7 @@ Tight coupling between ML model components makes isolating changes difficult. Mo

### Correction Cascades

-![Figure 14.3: The flowchart depicts the concept of correction cascades in the ML workflow, from problem statement to model deployment. The arcs represent the potential iterative corrections needed at each stage of the workflow, with different colors corresponding to distinct issues such as interacting with physical world brittleness, inadequate application-domain expertise, conflicting reward systems, and poor cross-organizational documentation. The red arrows indicate the impact of cascades, which can lead to significant revisions in the model development process, while the dotted red line represents the drastic measure of abandoning the process to restart. This visual emphasizes the complex, interconnected nature of ML system development and the importance of addressing these issues early in the development cycle to mitigate their amplifying effects downstream. [@sculley2015hidden](images/png/data_cascades.png)
+![Figure 14.3: The flowchart depicts the concept of correction cascades in the ML workflow, from problem statement to model deployment. The arcs represent the potential iterative corrections needed at each stage of the workflow, with different colors corresponding to distinct issues such as interacting with physical world brittleness, inadequate application-domain expertise, conflicting reward systems, and poor cross-organizational documentation. The red arrows indicate the impact of cascades, which can lead to significant revisions in the model development process, while the dotted red line represents the drastic measure of abandoning the process to restart. This visual emphasizes the complex, interconnected nature of ML system development and the importance of addressing these issues early in the development cycle to mitigate their amplifying effects downstream. [@sculley2015hidden]](images/png/data_cascades.png)

Building models sequentially creates risky dependencies where later models rely on earlier ones. For example, taking an existing model and fine-tuning it for a new use case seems efficient. However, this bakes in assumptions from the original model that may eventually need correction.

From 0be34fb564ec7d6d81c5c7d43b26ba64217e16d8 Mon Sep 17 00:00:00 2001
From: Jared Ping
Date: Mon, 29 Jan 2024 19:59:57 +0200
Subject: [PATCH 5/8] Security: Fix video render blocking rendering of remaining content

---
 contents/privacy_security/privacy_security.qmd | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/contents/privacy_security/privacy_security.qmd b/contents/privacy_security/privacy_security.qmd
index 6b78f1614..5153bbed6 100644
--- a/contents/privacy_security/privacy_security.qmd
+++ b/contents/privacy_security/privacy_security.qmd
@@ -80,7 +80,7 @@ This breach was significant due to its sophistication; Stuxnet specifically targ

The Jeep Cherokee hack was a groundbreaking event demonstrating the risks inherent in increasingly connected automobiles [@miller2019lessons]. In a controlled demonstration, security researchers remotely exploited a vulnerability in the Uconnect entertainment system, which had a cellular connection to the internet. They were able to control the vehicle's engine, transmission, and brakes, alarming the automotive industry into recognizing the severe safety implications of cyber vulnerabilities in vehicles.

-{{< video title="Hackers Remotely Kill a Jeep on a Highway" }}
+{{< video title="Hackers Remotely Kill a Jeep on a Highway" >}}

While this wasn't an attack on an ML system per se, the reliance of modern vehicles on embedded systems for safety-critical functions has significant parallels to the deployment of ML in embedded systems, underscoring the need for robust security at the hardware level.

From ce0becd4d50c060910af8d92766e68eeef59fd89 Mon Sep 17 00:00:00 2001
From: Jared Ping
Date: Mon, 29 Jan 2024 20:03:58 +0200
Subject: [PATCH 6/8] Security: Fix video URL special character blocking rendering of remaining content

---
 contents/privacy_security/privacy_security.qmd | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/contents/privacy_security/privacy_security.qmd b/contents/privacy_security/privacy_security.qmd
index 5153bbed6..e38c1aeee 100644
--- a/contents/privacy_security/privacy_security.qmd
+++ b/contents/privacy_security/privacy_security.qmd
@@ -352,7 +352,7 @@ The example above shows how we can infer information about the encryption proces

For additional details, please see the following video:

-{{< video title="ECED4406 - 0x501 Power Analysis Attacks" }}
+{{< video title="ECED4406 - 0x501 Power Analysis Attacks" >}}

Another example is an ML system for speech recognition, which processes voice commands to perform actions. By measuring the time it takes for the system to respond to commands or the power used during processing, an attacker could infer what commands are being processed and thus learn about the system's operational patterns.
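Patch 6 sits beside the chapter's discussion of timing and power side channels, where response time alone can reveal what a system is processing. The Python sketch below is a hypothetical, editor-added illustration of that idea, not an attack on any real ML system; the "secret" string and trial counts are invented. An early-exit comparison runs longer the more of the guess matches, which is exactly the signal a timing attacker averages over many measurements.

```python
import hmac
import time

SECRET = b"unlock_door"  # hypothetical command the device accepts (illustration only)

def naive_equal(guess: bytes) -> bool:
    """Early-exit comparison: execution time depends on the matching prefix length."""
    if len(guess) != len(SECRET):
        return False
    for g, s in zip(guess, SECRET):
        if g != s:       # returns as soon as one byte differs
            return False
    return True

def measure(guess: bytes, trials: int = 200_000) -> float:
    start = time.perf_counter()
    for _ in range(trials):
        naive_equal(guess)
    return time.perf_counter() - start

early_mismatch = b"xxxxxxxxxxx"  # wrong at the first byte
late_mismatch = b"unlock_dooX"   # wrong only at the last byte
print(f"early mismatch: {measure(early_mismatch):.3f} s")
print(f"late mismatch:  {measure(late_mismatch):.3f} s")  # consistently slower

# Mitigation: a constant-time comparison removes the data-dependent early exit.
print("constant-time compare:", hmac.compare_digest(late_mismatch, SECRET))
```

Constant-time primitives such as `hmac.compare_digest` are the standard mitigation; the measured gap here is small and noisy, which is why real attacks average over many traces.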
Even more subtly, the sound emitted by a computer's fan or hard drive could change in response to the workload, which a sensitive microphone could pick up and analyze to determine what kind of operations are being performed.

From 9dd810adb7aeb4ed137bbe63fcd81d8e8acd0ac8 Mon Sep 17 00:00:00 2001
From: Jared Ping
Date: Mon, 29 Jan 2024 20:13:45 +0200
Subject: [PATCH 7/8] Security: Fix reference rendering + correct grammar

---
 contents/privacy_security/privacy_security.qmd | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/contents/privacy_security/privacy_security.qmd b/contents/privacy_security/privacy_security.qmd
index e38c1aeee..d988ddcf5 100644
--- a/contents/privacy_security/privacy_security.qmd
+++ b/contents/privacy_security/privacy_security.qmd
@@ -650,11 +650,11 @@ Techniques like de-identification, aggregation, anonymization, and federation ca

Many embedded ML applications handle sensitive user data under HIPAA, GDPR, and CCPA regulations. Understanding the protections mandated by these laws is crucial for building compliant systems.

-* [HIPAA]( HIPAA Privacy Rule establishes,care providers that conduct certain)governs medical data privacy and security in the US, with severe penalties for violations. Any health-related embedded ML devices like diagnostic wearables or assistive robots would need to implement controls like audit trails, access controls, and encryption prescribed by HIPAA.
+* [HIPAA]( HIPAA Privacy Rule establishes,care providers that conduct certain) governs medical data privacy and security in the US, with severe penalties for violations. Any health-related embedded ML devices like diagnostic wearables or assistive robots would need to implement controls like audit trails, access controls, and encryption prescribed by HIPAA.
+* [CCPA]( CCPA applies to for,, households, or devices; or) in California focuses on protecting consumer data privacy through provisions like required disclosures and opt-out rights. IoT gadgets like smart speakers and fitness trackers used by Californians would likely fall under its scope.

* CCPA was the first state-specific set of regulations surrounding privacy concerns. Following the CCPA, similar regulations were also enacted in [10 other states](https://pro.bloomberglaw.com/brief/state-privacy-legislation-tracker/), with some states proposing bills for consumer data privacy protections.
From 39b7bdecc5a88562737e22cf353a995f8cc82715 Mon Sep 17 00:00:00 2001 From: "github-actions[bot]" Date: Fri, 2 Feb 2024 20:04:25 +0000 Subject: [PATCH 8/8] Update readme and contributors.qmd with contributors --- .all-contributorsrc | 125 ++++++++++++++++++++------------------ README.md | 43 ++++++------- contents/contributors.qmd | 43 ++++++------- 3 files changed, 110 insertions(+), 101 deletions(-) diff --git a/.all-contributorsrc b/.all-contributorsrc index 672b89b3c..309346942 100644 --- a/.all-contributorsrc +++ b/.all-contributorsrc @@ -27,13 +27,6 @@ "profile": "https://github.com/V0XNIHILI", "contributions": [] }, - { - "login": "mpstewart1", - "name": "Matthew Stewart", - "avatar_url": "https://avatars.githubusercontent.com/mpstewart1", - "profile": "https://github.com/mpstewart1", - "contributions": [] - }, { "login": "Naeemkh", "name": "naeemkh", @@ -51,10 +44,17 @@ { "login": "NaN", "name": "Maximilian Lam", - "avatar_url": "https://www.gravatar.com/avatar/eee539bff154270654c048ed80399cac?d=identicon&s=100", + "avatar_url": "https://www.gravatar.com/avatar/f122ace5ce5e395656404dd58a006770?d=identicon&s=100", "profile": "https://github.com/harvard-edge/cs249r_book/graphs/contributors", "contributions": [] }, + { + "login": "mpstewart1", + "name": "Matthew Stewart", + "avatar_url": "https://avatars.githubusercontent.com/mpstewart1", + "profile": "https://github.com/mpstewart1", + "contributions": [] + }, { "login": "Mjrovai", "name": "Marcelo Rovai", @@ -105,10 +105,10 @@ "contributions": [] }, { - "login": "srivatsankrishnan", - "name": "Srivatsan Krishnan", - "avatar_url": "https://avatars.githubusercontent.com/srivatsankrishnan", - "profile": "https://github.com/srivatsankrishnan", + "login": "JaredP94", + "name": "Jared Ping", + "avatar_url": "https://avatars.githubusercontent.com/JaredP94", + "profile": "https://github.com/JaredP94", "contributions": [] }, { @@ -118,6 +118,13 @@ "profile": "https://github.com/andreamurillomtz", "contributions": [] }, + { + "login": "srivatsankrishnan", + "name": "Srivatsan Krishnan", + "avatar_url": "https://avatars.githubusercontent.com/srivatsankrishnan", + "profile": "https://github.com/srivatsankrishnan", + "contributions": [] + }, { "login": "arnaumarin", "name": "arnaumarin", @@ -135,7 +142,7 @@ { "login": "NaN", "name": "Aghyad Deeb", - "avatar_url": "https://www.gravatar.com/avatar/2467dcf4d78d20bd24e7207985130915?d=identicon&s=100", + "avatar_url": "https://www.gravatar.com/avatar/099300c94c7f0e5a80ef69c6db4f2f29?d=identicon&s=100", "profile": "https://github.com/harvard-edge/cs249r_book/graphs/contributors", "contributions": [] }, @@ -154,10 +161,10 @@ "contributions": [] }, { - "login": "MichaelSchnebly", - "name": "Michael Schnebly", - "avatar_url": "https://avatars.githubusercontent.com/MichaelSchnebly", - "profile": "https://github.com/MichaelSchnebly", + "login": "jared-ni", + "name": "Jared Ni", + "avatar_url": "https://avatars.githubusercontent.com/jared-ni", + "profile": "https://github.com/jared-ni", "contributions": [] }, { @@ -167,13 +174,6 @@ "profile": "https://github.com/ELSuitorHarvard", "contributions": [] }, - { - "login": "jared-ni", - "name": "Jared Ni", - "avatar_url": "https://avatars.githubusercontent.com/jared-ni", - "profile": "https://github.com/jared-ni", - "contributions": [] - }, { "login": "Ekhao", "name": "Emil Njor", @@ -188,6 +188,20 @@ "profile": "https://github.com/oishib", "contributions": [] }, + { + "login": "MichaelSchnebly", + "name": "Michael Schnebly", + "avatar_url": 
"https://avatars.githubusercontent.com/MichaelSchnebly", + "profile": "https://github.com/MichaelSchnebly", + "contributions": [] + }, + { + "login": "BaeHenryS", + "name": "Henry Bae", + "avatar_url": "https://avatars.githubusercontent.com/BaeHenryS", + "profile": "https://github.com/BaeHenryS", + "contributions": [] + }, { "login": "jaywonchung", "name": "Jae-Won Chung", @@ -203,10 +217,10 @@ "contributions": [] }, { - "login": "BaeHenryS", - "name": "Henry Bae", - "avatar_url": "https://avatars.githubusercontent.com/BaeHenryS", - "profile": "https://github.com/BaeHenryS", + "login": "jzhou1318", + "name": "Jennifer Zhou", + "avatar_url": "https://avatars.githubusercontent.com/jzhou1318", + "profile": "https://github.com/jzhou1318", "contributions": [] }, { @@ -230,6 +244,13 @@ "profile": "https://github.com/eurashin", "contributions": [] }, + { + "login": "ShvetankPrakash", + "name": "Shvetank Prakash", + "avatar_url": "https://avatars.githubusercontent.com/ShvetankPrakash", + "profile": "https://github.com/ShvetankPrakash", + "contributions": [] + }, { "login": "colbybanbury", "name": "Colby Banbury", @@ -244,13 +265,6 @@ "profile": "https://github.com/AditiR-42", "contributions": [] }, - { - "login": "ShvetankPrakash", - "name": "Shvetank Prakash", - "avatar_url": "https://avatars.githubusercontent.com/ShvetankPrakash", - "profile": "https://github.com/ShvetankPrakash", - "contributions": [] - }, { "login": "arbass22", "name": "Andrew Bass", @@ -258,13 +272,6 @@ "profile": "https://github.com/arbass22", "contributions": [] }, - { - "login": "jzhou1318", - "name": "Jennifer Zhou", - "avatar_url": "https://avatars.githubusercontent.com/jzhou1318", - "profile": "https://github.com/jzhou1318", - "contributions": [] - }, { "login": "alex-oesterling", "name": "Alex Oesterling", @@ -287,10 +294,10 @@ "contributions": [] }, { - "login": "abigailswallow", - "name": "abigailswallow", - "avatar_url": "https://avatars.githubusercontent.com/abigailswallow", - "profile": "https://github.com/abigailswallow", + "login": "jessicaquaye", + "name": "Jessica Quaye", + "avatar_url": "https://avatars.githubusercontent.com/jessicaquaye", + "profile": "https://github.com/jessicaquaye", "contributions": [] }, { @@ -310,7 +317,7 @@ { "login": "NaN", "name": "Annie Laurie Cook", - "avatar_url": "https://www.gravatar.com/avatar/0bc20a0b747a3e9e5c7d975f5f664839?d=identicon&s=100", + "avatar_url": "https://www.gravatar.com/avatar/860069c9618800e963c1d820d1aecb67?d=identicon&s=100", "profile": "https://github.com/harvard-edge/cs249r_book/graphs/contributors", "contributions": [] }, @@ -328,13 +335,6 @@ "profile": "https://github.com/skmur", "contributions": [] }, - { - "login": "eezike", - "name": "Emeka Ezike", - "avatar_url": "https://avatars.githubusercontent.com/eezike", - "profile": "https://github.com/eezike", - "contributions": [] - }, { "login": "ciyer64", "name": "Curren Iyer", @@ -342,10 +342,17 @@ "profile": "https://github.com/ciyer64", "contributions": [] }, + { + "login": "abigailswallow", + "name": "abigailswallow", + "avatar_url": "https://avatars.githubusercontent.com/abigailswallow", + "profile": "https://github.com/abigailswallow", + "contributions": [] + }, { "login": "NaN", "name": "Costin-Andrei Oncescu", - "avatar_url": "https://www.gravatar.com/avatar/e6cf18ae420c323190b4233e5286ff32?d=identicon&s=100", + "avatar_url": "https://www.gravatar.com/avatar/1617d6e68c2a6b2eb2a6f3d8eb0efc5b?d=identicon&s=100", "profile": "https://github.com/harvard-edge/cs249r_book/graphs/contributors", 
"contributions": [] }, @@ -364,10 +371,10 @@ "contributions": [] }, { - "login": "jessicaquaye", - "name": "Jessica Quaye", - "avatar_url": "https://avatars.githubusercontent.com/jessicaquaye", - "profile": "https://github.com/jessicaquaye", + "login": "eezike", + "name": "Emeka Ezike", + "avatar_url": "https://avatars.githubusercontent.com/eezike", + "profile": "https://github.com/eezike", "contributions": [] } ], diff --git a/README.md b/README.md index 929b87c0f..aa0cce7c4 100644 --- a/README.md +++ b/README.md @@ -57,12 +57,12 @@ Please note that the cs249r project is released with a [Contributor Code of Cond Vijay Janapa Reddi
[README.md contributor grid: auto-generated all-contributors HTML table whose markup was not preserved here, leaving only repeated contributor names. The hunks reorder the existing contributor cells and add the newly credited contributors, mirroring the .all-contributorsrc changes above.]

diff --git a/contents/contributors.qmd b/contents/contributors.qmd
index 40a7ecc08..4b85571eb 100644
--- a/contents/contributors.qmd
+++ b/contents/contributors.qmd
@@ -72,12 +72,12 @@ We extend our sincere thanks to the diverse group of individuals who have genero

[contents/contributors.qmd contributor grid: the same auto-generated table with the same reordering as the README.md grid above.]