Commit

Deploying to gh-pages from @ ca53e5d 🚀
sawhney-medha committed Feb 29, 2024
1 parent 58c3cf7 commit 14e66cd
Showing 3 changed files with 3 additions and 3 deletions.
2 changes: 1 addition & 1 deletion assets/jupyter/blog.ipynb.html

Large diffs are not rendered by default.

2 changes: 1 addition & 1 deletion feed.xml
@@ -1 +1 @@
<?xml version="1.0" encoding="utf-8"?><feed xmlns="http://www.w3.org/2005/Atom" xml:lang="en"><generator uri="https://jekyllrb.com/" version="4.3.3">Jekyll</generator><link href="https://sawhney-medha.github.io/feed.xml" rel="self" type="application/atom+xml"/><link href="https://sawhney-medha.github.io/" rel="alternate" type="text/html" hreflang="en"/><updated>2024-02-29T08:50:58+00:00</updated><id>https://sawhney-medha.github.io/feed.xml</id><title type="html">blank</title><subtitle>Computer Science Phd Student at Virginia Tech </subtitle><entry><title type="html">Displaying External Posts on Your al-folio Blog</title><link href="https://sawhney-medha.github.io/blog/2022/displaying-external-posts-on-your-al-folio-blog/" rel="alternate" type="text/html" title="Displaying External Posts on Your al-folio Blog"/><published>2022-04-23T23:20:09+00:00</published><updated>2022-04-23T23:20:09+00:00</updated><id>https://sawhney-medha.github.io/blog/2022/displaying-external-posts-on-your-al-folio-blog</id><content type="html" xml:base="https://sawhney-medha.github.io/blog/2022/displaying-external-posts-on-your-al-folio-blog/"><![CDATA[]]></content><author><name></name></author></entry></feed>
<?xml version="1.0" encoding="utf-8"?><feed xmlns="http://www.w3.org/2005/Atom" xml:lang="en"><generator uri="https://jekyllrb.com/" version="4.3.3">Jekyll</generator><link href="https://sawhney-medha.github.io/feed.xml" rel="self" type="application/atom+xml"/><link href="https://sawhney-medha.github.io/" rel="alternate" type="text/html" hreflang="en"/><updated>2024-02-29T08:56:10+00:00</updated><id>https://sawhney-medha.github.io/feed.xml</id><title type="html">blank</title><subtitle>Computer Science Phd Student at Virginia Tech </subtitle><entry><title type="html">Displaying External Posts on Your al-folio Blog</title><link href="https://sawhney-medha.github.io/blog/2022/displaying-external-posts-on-your-al-folio-blog/" rel="alternate" type="text/html" title="Displaying External Posts on Your al-folio Blog"/><published>2022-04-23T23:20:09+00:00</published><updated>2022-04-23T23:20:09+00:00</updated><id>https://sawhney-medha.github.io/blog/2022/displaying-external-posts-on-your-al-folio-blog</id><content type="html" xml:base="https://sawhney-medha.github.io/blog/2022/displaying-external-posts-on-your-al-folio-blog/"><![CDATA[]]></content><author><name></name></author></entry></feed>
2 changes: 1 addition & 1 deletion publications/index.html
@@ -10,7 +10,7 @@
<span class="na">category</span> <span class="p">=</span> <span class="s">{Journal Publications}</span><span class="p">,</span>
<span class="na">dimension</span> <span class="p">=</span> <span class="s">{true}</span><span class="p">,</span>
<span class="na">publisher</span> <span class="p">=</span> <span class="s">{Wiley Online Library}</span><span class="p">,</span>
<span class="p">}</span></code></pre></figure> </div> </div> </div> </li></ol> <h4><b>Workshop Papers</b></h4> <h2 class="bibliography">2024</h2> <ol class="bibliography"><li> <div class="row"> <div class="col-sm-2 abbr"><abbr class="badge"><a href="https://aaai.org/aaai-conference/" rel="external nofollow noopener" target="_blank">AAAI</a></abbr></div> <div id="aaai_workshop" class="col-sm-8"> <div class="title">Are Pre-trained Vision Language Models (VLMs) Decent Zero-shot Predictors in Scientific Contexts?</div> <div class="author"> </div> <div class="periodical"> <em>AAAI</em>, 2024 </div> <div class="periodical"> Oral Presentation in Imageomics Workshop at AAAI 2024 </div> <div class="links"> <a href="https://sites.google.com/vt.edu/imageomics-aaai-24/" class="btn btn-sm z-depth-0" role="button" rel="external nofollow noopener" target="_blank">Website</a> </div> <div class="badges"> <span class="altmetric-embed" data-hide-no-mentions="true" data-hide-less-than="15" data-badge-type="2" data-badge-popover="right"></span> <span class="__dimensions_badge_embed__" data-pmid="" data-hide-zero-citations="true" data-style="small_rectangle" data-legend="hover-right" style="margin-bottom: 3px;"></span> </div> </div> </div> </li></ol> <h2 class="bibliography">2023</h2> <ol class="bibliography"><li> <div class="row"> <div class="col-sm-2 preview"> <figure> <picture> <img src="/assets/img/publication_preview/cvpr.png" class="preview z-depth-1 rounded" width="auto" height="auto" alt="cvpr.png" data-zoomable="" onerror="this.onerror=null; $('.responsive-img-srcset').remove();"> </picture> </figure> </div> <div id="cvpr_workshop" class="col-sm-8"> <div class="title">Detecting and Tracking Hard-to-Detect Bacteria in Dense Porous Backgrounds</div> <div class="author"> <em>Medha Sawhney*</em>, Bhas Karmarkar*, Eric Leaman, Arka Daw, Anuj Karpatne, and Bahareh Behkam</div> <div class="periodical"> <em>CVPR</em>, 2023 </div> <div class="periodical"> Oral + Poster Presentation in CV4Animals Workshop at CVPR 2023 </div> <div class="links"> <a class="abstract btn btn-sm z-depth-0" role="button">Abs</a> <a href="https://drive.google.com/file/d/1vLgjvb7ziv9PgKG6i_6-wmAd0SbllHCU/view" class="btn btn-sm z-depth-0" role="button" rel="external nofollow noopener" target="_blank">Poster</a> <a href="https://docs.google.com/presentation/d/1vTVuzMHudBhFC8muAsHCSljzGY2GG0VY/edit?usp=sharing&amp;ouid=110697798623030402492&amp;rtpof=true&amp;sd=true" class="btn btn-sm z-depth-0" role="button" rel="external nofollow noopener" target="_blank">Slides</a> <a href="https://www.cv4animals.com/home" class="btn btn-sm z-depth-0" role="button" rel="external nofollow noopener" target="_blank">Website</a> </div> <div class="badges"> <span class="altmetric-embed" data-hide-no-mentions="true" data-hide-less-than="15" data-badge-type="2" data-badge-popover="right"></span> <span class="__dimensions_badge_embed__" data-pmid="" data-hide-zero-citations="true" data-style="small_rectangle" data-legend="hover-right" style="margin-bottom: 3px;"></span> </div> <div class="abstract hidden"> <p>Studying bacteria motility is crucial to understanding and controlling biomedical and ecological phenomena involving bacteria. 
Tracking bacteria in complex environments such as polysaccharides (agar) or protein (collagen) hydrogels is a challenging task due to the lack of visually distinguishable features between bacteria and surrounding environment, making state-of-the-art methods for tracking easily recognizable objects such as pedestrians and cars unsuitable for this application. We propose a novel pipeline for detecting and tracking bacteria in bright-field microscopy videos involving bacteria in complex backgrounds. Our pipeline uses motion-based features and combines multiple models for detecting bacteria of varying difficulty levels. We apply multiple filters to prune false positive detections, and then use the SORT tracking algorithm with interpolation in case of missing detections. Our results demonstrate that our pipeline can accurately track hard-to-detect bacteria, achieving a high precision and recall.</p> </div> </div> </div> </li></ol> <h4><b>Preprints</b></h4> <h2 class="bibliography">2023</h2> <ol class="bibliography"><li> <div class="row"> <div class="col-sm-2 preview"> <figure> <picture> <img src="/assets/img/publication_preview/memtrack_architecture.png" class="preview z-depth-1 rounded" width="auto" height="auto" alt="memtrack_architecture.png" data-zoomable="" onerror="this.onerror=null; $('.responsive-img-srcset').remove();"> </picture> </figure> </div> <div id="sawhney2023memtrack" class="col-sm-8"> <div class="title">MEMTRACK: A Deep Learning-Based Approach to Microrobot Tracking in Dense and Low-Contrast Environments</div> <div class="author"> <em>Medha Sawhney</em>, Bhas Karmarkar, Eric J. Leaman, Arka Daw, Anuj Karpatne, and Bahareh Behkam</div> <div class="periodical"> <em>arXiv</em>, 2023 </div> <div class="periodical"> </div> <div class="links"> <a class="abstract btn btn-sm z-depth-0" role="button">Abs</a> <a href="http://arxiv.org/abs/2310.09441" class="btn btn-sm z-depth-0" role="button" rel="external nofollow noopener" target="_blank">arXiv</a> <a class="bibtex btn btn-sm z-depth-0" role="button">Bib</a> <a href="https://github.com/sawhney-medha/MEMTrack" class="btn btn-sm z-depth-0" role="button" rel="external nofollow noopener" target="_blank">Code</a> </div> <div class="badges"> <span class="altmetric-embed" data-hide-no-mentions="true" data-hide-less-than="15" data-badge-type="2" data-badge-popover="right" data-arxiv-id="2310.09441"></span> <span class="__dimensions_badge_embed__" data-pmid="" data-hide-zero-citations="true" data-style="small_rectangle" data-legend="hover-right" style="margin-bottom: 3px;"></span> </div> <div class="abstract hidden"> <p>Tracking microrobots is challenging, considering their minute size and high speed. As the field progresses towards developing microrobots for biomedical applications and conducting mechanistic studies in physiologically relevant media (e.g., collagen), this challenge is exacerbated by the dense surrounding environments with feature size and shape comparable to microrobots. Herein, we report Motion Enhanced Multi-level Tracker (MEMTrack), a robust pipeline for detecting and tracking microrobots using synthetic motion features, deep learning-based object detection, and a modified Simple Online and Real-time Tracking (SORT) algorithm with interpolation for tracking. Our object detection approach combines different models based on the object’s motion pattern. We trained and validated our model using bacterial micro-motors in collagen (tissue phantom) and tested it in collagen and aqueous media. 
We demonstrate that MEMTrack accurately tracks even the most challenging bacteria missed by skilled human annotators, achieving precision and recall of 77% and 48% in collagen and 94% and 35% in liquid media, respectively. Moreover, we show that MEMTrack can quantify average bacteria speed with no statistically significant difference from the laboriously-produced manual tracking data. MEMTrack represents a significant contribution to microrobot localization and tracking, and opens the potential for vision-based deep learning approaches to microrobot control in dense and low-contrast settings. All source code for training and testing MEMTrack and reproducing the results of the paper have been made publicly available this https URL.</p> </div> <div class="bibtex hidden"> <figure class="highlight"><pre><code class="language-bibtex" data-lang="bibtex"><span class="nc">@article</span><span class="p">{</span><span class="nl">sawhney2023memtrack</span><span class="p">,</span>
<span class="p">}</span></code></pre></figure> </div> </div> </div> </li></ol> <h4><b>Workshop Papers</b></h4> <h2 class="bibliography">2024</h2> <ol class="bibliography"><li> <div class="row"> <div class="col-sm-2 abbr"><abbr class="badge"><a href="https://aaai.org/aaai-conference/" rel="external nofollow noopener" target="_blank">AAAI</a></abbr></div> <div id="aaai_workshop" class="col-sm-8"> <div class="title">Are Pre-trained Vision Language Models (VLMs) Decent Zero-shot Predictors in Scientific Contexts?</div> <div class="author"> M. Maruf, Arka Daw, <em>Medha Sawhney</em>, Kazi Sajeed Mehrab, Mridul Khurana, Harish Babu, and Anuj Karpatne</div> <div class="periodical"> <em>AAAI</em>, 2024 </div> <div class="periodical"> Oral Presentation in Imageomics Workshop at AAAI 2024 </div> <div class="links"> <a href="https://sites.google.com/vt.edu/imageomics-aaai-24/" class="btn btn-sm z-depth-0" role="button" rel="external nofollow noopener" target="_blank">Website</a> </div> <div class="badges"> <span class="altmetric-embed" data-hide-no-mentions="true" data-hide-less-than="15" data-badge-type="2" data-badge-popover="right"></span> <span class="__dimensions_badge_embed__" data-pmid="" data-hide-zero-citations="true" data-style="small_rectangle" data-legend="hover-right" style="margin-bottom: 3px;"></span> </div> </div> </div> </li></ol> <h2 class="bibliography">2023</h2> <ol class="bibliography"><li> <div class="row"> <div class="col-sm-2 preview"> <figure> <picture> <img src="/assets/img/publication_preview/cvpr.png" class="preview z-depth-1 rounded" width="auto" height="auto" alt="cvpr.png" data-zoomable="" onerror="this.onerror=null; $('.responsive-img-srcset').remove();"> </picture> </figure> </div> <div id="cvpr_workshop" class="col-sm-8"> <div class="title">Detecting and Tracking Hard-to-Detect Bacteria in Dense Porous Backgrounds</div> <div class="author"> <em>Medha Sawhney*</em>, Bhas Karmarkar*, Eric Leaman, Arka Daw, Anuj Karpatne, and Bahareh Behkam</div> <div class="periodical"> <em>CVPR</em>, 2023 </div> <div class="periodical"> Oral + Poster Presentation in CV4Animals Workshop at CVPR 2023 </div> <div class="links"> <a class="abstract btn btn-sm z-depth-0" role="button">Abs</a> <a href="https://drive.google.com/file/d/1vLgjvb7ziv9PgKG6i_6-wmAd0SbllHCU/view" class="btn btn-sm z-depth-0" role="button" rel="external nofollow noopener" target="_blank">Poster</a> <a href="https://docs.google.com/presentation/d/1vTVuzMHudBhFC8muAsHCSljzGY2GG0VY/edit?usp=sharing&amp;ouid=110697798623030402492&amp;rtpof=true&amp;sd=true" class="btn btn-sm z-depth-0" role="button" rel="external nofollow noopener" target="_blank">Slides</a> <a href="https://www.cv4animals.com/home" class="btn btn-sm z-depth-0" role="button" rel="external nofollow noopener" target="_blank">Website</a> </div> <div class="badges"> <span class="altmetric-embed" data-hide-no-mentions="true" data-hide-less-than="15" data-badge-type="2" data-badge-popover="right"></span> <span class="__dimensions_badge_embed__" data-pmid="" data-hide-zero-citations="true" data-style="small_rectangle" data-legend="hover-right" style="margin-bottom: 3px;"></span> </div> <div class="abstract hidden"> <p>Studying bacteria motility is crucial to understanding and controlling biomedical and ecological phenomena involving bacteria. 
Tracking bacteria in complex environments such as polysaccharides (agar) or protein (collagen) hydrogels is a challenging task due to the lack of visually distinguishable features between bacteria and surrounding environment, making state-of-the-art methods for tracking easily recognizable objects such as pedestrians and cars unsuitable for this application. We propose a novel pipeline for detecting and tracking bacteria in bright-field microscopy videos involving bacteria in complex backgrounds. Our pipeline uses motion-based features and combines multiple models for detecting bacteria of varying difficulty levels. We apply multiple filters to prune false positive detections, and then use the SORT tracking algorithm with interpolation in case of missing detections. Our results demonstrate that our pipeline can accurately track hard-to-detect bacteria, achieving a high precision and recall.</p> </div> </div> </div> </li></ol> <h4><b>Preprints</b></h4> <h2 class="bibliography">2023</h2> <ol class="bibliography"><li> <div class="row"> <div class="col-sm-2 preview"> <figure> <picture> <img src="/assets/img/publication_preview/memtrack_architecture.png" class="preview z-depth-1 rounded" width="auto" height="auto" alt="memtrack_architecture.png" data-zoomable="" onerror="this.onerror=null; $('.responsive-img-srcset').remove();"> </picture> </figure> </div> <div id="sawhney2023memtrack" class="col-sm-8"> <div class="title">MEMTRACK: A Deep Learning-Based Approach to Microrobot Tracking in Dense and Low-Contrast Environments</div> <div class="author"> <em>Medha Sawhney</em>, Bhas Karmarkar, Eric J. Leaman, Arka Daw, Anuj Karpatne, and Bahareh Behkam</div> <div class="periodical"> <em>arXiv</em>, 2023 </div> <div class="periodical"> </div> <div class="links"> <a class="abstract btn btn-sm z-depth-0" role="button">Abs</a> <a href="http://arxiv.org/abs/2310.09441" class="btn btn-sm z-depth-0" role="button" rel="external nofollow noopener" target="_blank">arXiv</a> <a class="bibtex btn btn-sm z-depth-0" role="button">Bib</a> <a href="https://github.com/sawhney-medha/MEMTrack" class="btn btn-sm z-depth-0" role="button" rel="external nofollow noopener" target="_blank">Code</a> </div> <div class="badges"> <span class="altmetric-embed" data-hide-no-mentions="true" data-hide-less-than="15" data-badge-type="2" data-badge-popover="right" data-arxiv-id="2310.09441"></span> <span class="__dimensions_badge_embed__" data-pmid="" data-hide-zero-citations="true" data-style="small_rectangle" data-legend="hover-right" style="margin-bottom: 3px;"></span> </div> <div class="abstract hidden"> <p>Tracking microrobots is challenging, considering their minute size and high speed. As the field progresses towards developing microrobots for biomedical applications and conducting mechanistic studies in physiologically relevant media (e.g., collagen), this challenge is exacerbated by the dense surrounding environments with feature size and shape comparable to microrobots. Herein, we report Motion Enhanced Multi-level Tracker (MEMTrack), a robust pipeline for detecting and tracking microrobots using synthetic motion features, deep learning-based object detection, and a modified Simple Online and Real-time Tracking (SORT) algorithm with interpolation for tracking. Our object detection approach combines different models based on the object’s motion pattern. We trained and validated our model using bacterial micro-motors in collagen (tissue phantom) and tested it in collagen and aqueous media. 
We demonstrate that MEMTrack accurately tracks even the most challenging bacteria missed by skilled human annotators, achieving precision and recall of 77% and 48% in collagen and 94% and 35% in liquid media, respectively. Moreover, we show that MEMTrack can quantify average bacteria speed with no statistically significant difference from the laboriously-produced manual tracking data. MEMTrack represents a significant contribution to microrobot localization and tracking, and opens the potential for vision-based deep learning approaches to microrobot control in dense and low-contrast settings. All source code for training and testing MEMTrack and reproducing the results of the paper have been made publicly available this https URL.</p> </div> <div class="bibtex hidden"> <figure class="highlight"><pre><code class="language-bibtex" data-lang="bibtex"><span class="nc">@article</span><span class="p">{</span><span class="nl">sawhney2023memtrack</span><span class="p">,</span>
<span class="na">title</span> <span class="p">=</span> <span class="s">{MEMTRACK: A Deep Learning-Based Approach to Microrobot Tracking in Dense and Low-Contrast Environments}</span><span class="p">,</span>
<span class="na">author</span> <span class="p">=</span> <span class="s">{Sawhney, Medha and Karmarkar, Bhas and Leaman, Eric J. and Daw, Arka and Karpatne, Anuj and Behkam, Bahareh}</span><span class="p">,</span>
<span class="na">year</span> <span class="p">=</span> <span class="s">{2023}</span><span class="p">,</span>
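
Note: the MEMTrack abstract in the diff above describes tracking with a modified SORT algorithm plus interpolation for frames with missing detections. As an illustrative sketch only, not code from the MEMTrack repository, the gap-filling step might look like the following; the function name interpolate_track and the (frame, x, y) tuple format are assumptions.

# Sketch: fill frames with missing detections in a single object track
# by linear interpolation, in the spirit of SORT-with-interpolation.
# Illustrative only; names and the (frame, x, y) format are assumptions.
def interpolate_track(detections):
    detections = sorted(detections)  # order by frame index
    filled = [detections[0]]
    for (f0, x0, y0), (f1, x1, y1) in zip(detections, detections[1:]):
        for k in range(1, f1 - f0):  # frames with no detection in the gap
            t = k / (f1 - f0)
            filled.append((f0 + k, x0 + t * (x1 - x0), y0 + t * (y1 - y0)))
        filled.append((f1, x1, y1))
    return filled

# Frames 2 and 3 had no detection; they are filled between frames 1 and 4.
print(interpolate_track([(1, 10.0, 10.0), (4, 16.0, 13.0)]))
# -> [(1, 10.0, 10.0), (2, 12.0, 11.0), (3, 14.0, 12.0), (4, 16.0, 13.0)]

Linear interpolation over short gaps keeps a track continuous when the detector misses an object for a few frames; per the abstract, MEMTrack combines this idea with motion-feature-based detection models rather than using it in isolation.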
