<!DOCTYPE HTML>
<!--
Stellar by HTML5 UP
html5up.net | @ajlkn
Free for personal and commercial use under the CCA 3.0 license (html5up.net/license)
-->
<html lang="en">
<head>
<title>HAWKES HACKS</title>
<meta charset="utf-8" />
<meta name="viewport" content="width=device-width, initial-scale=1">
<script>document.getElementsByTagName("html")[0].className += " js";</script>
<link href="bootstrap/css/bootstrap.min.css" rel="stylesheet">
<link rel="stylesheet" href="assets/css/main.css" />
<noscript><link rel="stylesheet" href="assets/css/noscript.css" /></noscript>
<link rel="stylesheet" href="assets/css/style.css">
</head>
<body class="is-preload">
<!-- Wrapper -->
<div id="wrapper">
<!-- Header -->
<center>
<img src="images/logo_hawkes.svg" width="80%" height="50%" alt="" />
<h2><font color="#e6e6ff"> Third edition of the Hawkes hackathon </font></h2>
<h1><font color="#e6e6ff"> 6-8<sup>th</sup> November 2024 </font></h1>
<br>
<a href="https://www.eventbrite.co.uk/e/hawkes-hacks-tickets-1046969793837?utm-campaign=social&utm-content=attendeeshare&utm-medium=discovery&utm-term=listing&utm-source=cp&aff=ebdsshcopyurl" class="button primary"><b>Registration</b></a>
</center><p style="margin-bottom:0.3cm;">
<!-- Nav -->
<nav id="nav">
<ul>
<li><a href="#projectsubmision" class="button primary"><b>Project Submission</b></a></li>
<li><a href="#projects" class="active">Projects</a></li>
<li><a href="#program">Program</a></li>
<li><a href="#location">Location</a></li>
<li><a href="edition2022/index.html">Edition 2022</a></li>
<li><a href="edition2023/index.html">Edition 2023</a></li>
</ul>
</nav>
<!-- Main -->
<div id="main">
<!-- Project submission -->
<section id="projectsubmision" class="main">
<header class="major">
<h2>Project Submission</h2>
</header>
<center>
<p><strong><u>DEADLINE: 25<sup>TH</sup> OCTOBER 2024</u></strong>
<p>Find the project submission form below. Please complete it if you would like to take part in this 3<sup>rd</sup> edition of the Hawkes HACKS! If you have any questions, do not hesitate to ask the organising team.<br>
<iframe src="https://fm.addxt.com/form/?vf=1FAIpQLSdLAGOOEWK_p7Gc-r-PEKlBxkVOiRN3uyvJL1qH1iyMO08MCg" width="100%" height="700" frameborder="0" marginheight="0" marginwidth="0">Loading…</iframe>
</center>
</section>
<!-- Projects -->
<section id="projects" class="main">
<header class="major">
<h2>Projects</h2>
</header>
<div class="container">
<div id="myCarousel" class="carousel slide" data-ride="carousel" data-interval="false">
<ol class="carousel-indicators">
<li data-target="#myCarousel" data-slide-to="0" class="active">1</li>
<li data-target="#myCarousel" data-slide-to="1"></li>
<li data-target="#myCarousel" data-slide-to="2"></li>
<li data-target="#myCarousel" data-slide-to="3"></li>
<li data-target="#myCarousel" data-slide-to="4"></li>
<li data-target="#myCarousel" data-slide-to="5"></li>
<li data-target="#myCarousel" data-slide-to="6"></li>
<li data-target="#myCarousel" data-slide-to="7"></li>
</ol>
<div class="carousel-inner">
<div class="carousel-item active">
<div class="carousel-title">
<table>
<tr>
<td><h4><b>Project A:</b></h4></td><td colspan="2"> <h4><b>Redefining Eye Tracking with Open-Source Real-Time AI Solutions</b></h4></td>
</tr>
<tr>
<td>Leader:</td> <td>Miguel Xochicale, Stephen Thompson</td> <td>Advanced Research Computing (ARC) Centre, UCL</td>
</tr>
</table>
</div>
<img class="rounded mx-auto d-block" src="images/project_A.gif" width="800">
<div class="carousel-description"><p align="justify">
End-to-end real-time AI workflows are challenging because several stages must be orchestrated: (1) data acquisition, curation, labelling and post-processing; (2) model training, validation, optimisation and deployment; and (3) design of appropriate graphical user interfaces.<br><br>
During two days of hacking activities, instructors and participants aim to achieve four learning outcomes: (1) train and test light-weight UNET and UNET-ViT models for segmentation of the public MOBIOUS dataset using Python-based libraries, (2) learn good software practices for contributing to open-source projects in compliance with medical device software standards (IEC 62304) using GitHub features, (3) optimise and deploy real-time models using NVIDIA’s AI sensor processing SDK for low-latency streaming workflows (holoscan-sdk), and (4) work together to design a simple eye-tracking application or fun game that demonstrates the advantages of real-time AI workflows in analysing eye pathologies and movement.<br><br>
Our goal is to bring together researchers, engineers, and clinicians from various departments to collaborate on hacking end-to-end real-time AI workflows, including development, evaluation, and integration. We also aim to foster collaborations and contributions that could lead to co-authorship in the development of our real-time AI workflow.<br><br>
PS. Kindly send an email to [email protected] to request access to the GitHub repository.
</p>
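To give a flavour of outcome (4), here is a minimal sketch (not part of the project code) of how a per-frame gaze signal could be derived from a segmentation model's output, assuming a binary pupil mask for each frame:

```python
import numpy as np

def pupil_centre(mask):
    """Centroid (row, col) of a binary pupil segmentation mask.
    Tracking these centroids across frames gives a simple gaze signal."""
    ys, xs = np.nonzero(mask)
    if ys.size == 0:
        raise ValueError("empty mask: no pupil detected in this frame")
    return float(ys.mean()), float(xs.mean())

# Toy 5x5 frame with a 2x2 "pupil" in the lower-right quadrant.
frame = np.zeros((5, 5), dtype=bool)
frame[3:5, 3:5] = True
print(pupil_centre(frame))  # (3.5, 3.5)
```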
</div>
</div>
<div class="carousel-item">
<div class="carousel-title">
<table>
<tr>
<td><h4><b>Project B:</b></h4></td><td colspan="2"> <h4><b>Getting MetricsReloaded ready to launch</b></h4></td>
</tr>
<tr>
<td>Leader:</td> <td>Carole Sudre</td> <td>UCL Hawkes Institute and MRC Unit for Lifelong Health and Ageing (LHA), UCL</td>
</tr>
</table>
</div>
<!-- <img class="rounded mx-auto d-block" src="images/project_MC.jpeg" width="800"> -->
<div class="carousel-description"><p align="justify">
MetricsReloaded is a collaborative scientific endeavour that recently published guidelines on the use of evaluation metrics in medical image processing tasks. We have developed a software tool to accompany these guidelines, hosted in the MONAI repository. This includes the library of metrics as well as the associated end-to-end evaluation pipelines.<br><br>
For people to use the software, we need to make it easy and reliable through:<br>
1. Implementation of tests for the evaluation pipelines<br>
2. Creation of didactic Jupyter notebooks to guide the user through instance segmentation and object detection evaluation procedures<br>
3. Improvements to the documentation to relate it more closely to the guideline papers.<br>
This is the first of three hackathons, across the UK, Germany and France, with the MetricsReloaded collaborators. Ultimately we aim to release a (relatively) stable version of the work by January 2025, and we will try to submit a paper specifically detailing the software's capabilities, to which active members of the hackathons could be invited to contribute.<br><br>
The MetricsReloaded software is the companion code to the guidelines described extensively <a href="https://www.nature.com/articles/s41592-023-02151-z">here</a> and the pitfalls explained <a href="https://www.nature.com/articles/s41592-023-02150-0">here</a>.
Code can be found at <a href="https://github.com/Project-MONAI/MetricsReloaded">https://github.com/Project-MONAI/MetricsReloaded</a>
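As a flavour of the kind of metric the library provides (MetricsReloaded itself handles the many edge cases the guidelines describe; this sketch is not its API), a minimal NumPy implementation of the Dice coefficient:

```python
import numpy as np

def dice(pred, ref):
    """Dice similarity coefficient between two binary masks:
    2|A∩B| / (|A| + |B|). Returns 1.0 when both masks are empty,
    one of the edge cases the guideline papers discuss."""
    pred, ref = np.asarray(pred, dtype=bool), np.asarray(ref, dtype=bool)
    denom = pred.sum() + ref.sum()
    if denom == 0:
        return 1.0
    return 2.0 * np.logical_and(pred, ref).sum() / denom

a = np.array([[1, 1, 0], [0, 1, 0]])
b = np.array([[1, 0, 0], [0, 1, 1]])
print(dice(a, b))  # 2*2/(3+3) ≈ 0.667
```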
</div>
</div>
<div class="carousel-item">
<div class="carousel-title">
<table>
<tr>
<td><h4><b>Project C:</b></h4></td><td colspan="2"> <h4><b>Adaptive Normative Modelling with Real-Time Quality Control Feedback</b></h4></td>
</tr>
<tr>
<td>Leader:</td> <td>Anthi Papouli, Kirsten Schroder</td> <td>Department of Computer Science, UCL</td>
</tr>
</table>
</div>
<img class="rounded mx-auto d-block" src="images/project_C.png" width="800">
<div class="carousel-description"><p align="justify">
Adaptive Normative Modelling with Real-Time Quality Control Feedback focuses on enhancing the reliability of brain MRI-based normative models by integrating dynamic quality control (QC) feedback. The main aim is to ensure accurate results from normative models, even when working with lower-quality MRI scans, by conducting QC both before and after FreeSurfer processing. This approach allows adaptive handling of issues like motion artefacts or noise in real time, reducing the need to discard valuable data. The challenge lies in creating a robust feedback loop that adjusts how the data is processed based on QC metrics. We will use brain MRI data, leveraging software such as FreeSurfer/FastSurfer, VisualQC and PyTorch/TensorFlow for potential future model adaptations.
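A minimal sketch of the feedback-loop idea, with toy stand-ins for the preprocessing and QC steps (the names, thresholds and "denoising" model below are illustrative, not project code):

```python
def adaptive_pipeline(noise_level, qc_threshold=0.2, max_attempts=3):
    """Hypothetical QC feedback loop: rather than discarding a noisy
    scan, re-run preprocessing with progressively stronger settings
    until the QC metric passes, or flag the scan if it never does."""
    def denoise(level, strength):
        # Toy preprocessing: stronger settings suppress more noise.
        return level / (1 + strength)

    for strength in range(max_attempts):
        residual = denoise(noise_level, strength)
        if residual < qc_threshold:   # QC gate passed
            return residual
    return None                       # QC never passed: flag for review

print(adaptive_pipeline(0.5))   # passes after stronger denoising
print(adaptive_pipeline(10.0))  # None: too noisy even after retries
```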
</div>
</div>
<div class="carousel-item">
<div class="carousel-title">
<table>
<tr>
<td><h4><b>Project D:</b></h4></td><td colspan="2"> <h4><b>Automated Diaphragm Height Measurement from CT Chest Scans Using AI and Image Processing Techniques</b></h4></td>
</tr>
<tr>
<td>Leader:</td> <td>Mehran Azimbagirad</td> <td>Department of Computer Science, UCL</td>
</tr>
</table>
</div>
<!-- <img class="rounded mx-auto d-block" src="images/project_ET.png" width="800"> -->
<div class="carousel-description"><p align="justify">
The diaphragm, a key respiratory muscle, plays a crucial role in lung function and can be affected by various pulmonary diseases. Accurate measurement of diaphragm height on chest CT scans can provide valuable insights into respiratory health and the diagnosis of conditions like chronic obstructive pulmonary disease (COPD) and diaphragm paralysis. <br><br> This project aims to develop an automated system for diaphragm height measurement using lung masks extracted from CT chest scans. By leveraging advanced image processing and AI techniques, the system will identify and segment the diaphragm, compute height measurements, and provide quantitative assessments. Machine learning or deep learning models can be trained to enhance the accuracy of diaphragm height estimation. The project aims to streamline the assessment of diaphragm function, reduce the need for manual measurements, and support clinicians in diagnosing respiratory conditions more efficiently.
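As an illustration of the kind of measurement involved, a toy NumPy sketch that treats the inferior surface of a binary lung mask as a proxy for the diaphragm (assumptions: clean masks, known slice spacing, z index 0 at the lung base; this is not the project's method):

```python
import numpy as np

def diaphragm_dome_height(lung_mask, z_spacing_mm):
    """Toy proxy for diaphragm dome height from a binary lung mask with
    axes (z, y, x). For every (y, x) column the lowest lung voxel
    approximates the diaphragm surface; the dome height is the spread
    of that surface times slice spacing. Real scans would need hole
    filling and left/right lung splitting first."""
    has_lung = lung_mask.any(axis=0)       # columns containing lung
    lowest = lung_mask.argmax(axis=0)      # first True voxel along z
    surface = lowest[has_lung]
    return float((surface.max() - surface.min()) * z_spacing_mm)

# Toy volume: a lung base that slopes upward by 3 slices across the mask.
vol = np.zeros((6, 1, 4), dtype=bool)
for x in range(4):
    vol[x:, 0, x] = True                   # base of column x is at z = x
print(diaphragm_dome_height(vol, z_spacing_mm=1.5))  # (3 - 0) * 1.5 = 4.5
```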
</div>
</div>
<div class="carousel-item">
<div class="carousel-title">
<table>
<tr>
<td><h4><b>Project E:</b></h4></td><td colspan="2"> <h4><b>Accurate segmentation of pulmonary airways and vessels</b></h4></td>
</tr>
<tr>
<td>Leader:</td> <td>Shanshan Wang</td> <td>UCL Centre for Medical Imaging</td>
</tr>
</table>
</div>
<!-- <img class="rounded mx-auto d-block" src="images/project_MX.png" width="800"> -->
<div class="carousel-description"><p align="justify">
Accurate segmentation of pulmonary airways and vessels is crucial for the diagnosis and treatment of pulmonary diseases. However, current deep learning approaches suffer from disconnectivity issues that hinder their clinical usefulness. <br><br> Topology repairing for vessels focuses on correcting discontinuities within segmented vessel structures in medical imaging. This approach addresses segmentation errors, such as disconnected vessel segments or small gaps, by reconstructing the topology of vascular networks. Through keypoint detection or graph-based techniques, the method identifies critical points or endpoints in segmented vessels and reconnects them to restore continuity. This process typically involves using a neural network to detect and predict connection points or applying algorithms to optimize connectivity across the network. Topology repair enhances the accuracy and anatomical coherence of segmented vessels, which is essential for reliable clinical analysis and diagnostics.
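A toy sketch of the reconnection step, assuming the pair of endpoints to bridge has already been identified (real methods would find them with a keypoint detector or graph analysis of the segmented network):

```python
import numpy as np

def bridge(p, q):
    """Rasterise a straight bridge between two endpoint voxels, the
    simplest possible repair of a gap between disconnected vessel
    segments (one point per step along the longest axis)."""
    p, q = np.asarray(p), np.asarray(q)
    n = int(np.abs(q - p).max()) + 1
    ts = np.linspace(0.0, 1.0, n)
    pts = np.rint(p + ts[:, None] * (q - p)).astype(int)
    return [tuple(pt) for pt in pts]

# Reconnect two vessel fragments separated by a 3-voxel gap.
print(bridge((0, 0), (0, 4)))
# [(0, 0), (0, 1), (0, 2), (0, 3), (0, 4)]
```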
</div>
</div>
<div class="carousel-item">
<div class="carousel-title">
<table>
<tr>
<td><h4><b>Project F:</b></h4></td><td colspan="2"> <h4><b>Generating synthetic images of the colon to improve model generalisability</b></h4></td>
</tr>
<tr>
<td>Leader:</td> <td>Ruaridh Gollifer</td> <td>Advanced Research Computing (ARC) Centre, UCL</td>
</tr>
</table>
</div>
<img class="rounded mx-auto d-block" src="images/project_F.png" width="800">
<div class="carousel-description"><p align="justify">
Blender is free and open-source software for 3D modelling and rendering. Its uses include synthetic dataset generation, which is of particular interest in medical imaging, where there is often limited real data available to train machine learning models. By creating large amounts of synthetic but realistic data, we can improve the performance of models in tasks such as polyp detection in image-guided surgery. Synthetic data generation has other advantages: tools like Blender give us more control, and we can generate a variety of ground-truth data, from segmentation masks to optic flow fields, which for real data would be very challenging to generate or would involve extensive, time-consuming manual labelling. Another advantage of this approach is that we can often easily scale up our synthetic datasets by randomising parameters of the modelled 3D geometry. The challenge is to make the data realistic and representative of the real data.<br><br> This project will use a previously created <a href="https://rdr.ucl.ac.uk/articles/dataset/Procedurally_Generated_Colonoscopy_and_Laparoscopy_Data_For_Improved_Model_Training_Performance/23843904">baseline dataset</a> of the colon and/or liver. The synthetic data provided will need some additional modification, i.e. geometry, texture, lighting, and camera settings, to make it more realistic. The challenge will be to generate synthetic data that is realistic enough to be useful for training models, while also generating enough data to train the models effectively. A Blender add-on (or plug-in) under development for data generation (<a href="https://github.com/UCL/Blender_Randomiser">https://github.com/UCL/Blender_Randomiser</a>) will serve as a starting point for the project, with the aim of checking data quality and, if time allows, pre-training models for polyp detection (YOLOv7) and organ segmentation (UNet), with performance evaluated on open-source datasets.
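The randomisation idea in miniature, using only the Python standard library (the actual add-on drives Blender's bpy API; the parameter names and ranges below are made-up examples of the kind of geometry, lighting and camera settings it varies):

```python
import random

def sample_scene_params(seed=None):
    """Toy domain-randomisation draw: each synthetic frame gets its own
    sampled scene parameters, so the dataset scales by re-sampling."""
    rng = random.Random(seed)
    return {
        "light_energy_w": rng.uniform(5.0, 50.0),      # lamp strength
        "camera_fov_deg": rng.uniform(60.0, 110.0),    # endoscope FOV
        "tissue_hue_shift": rng.uniform(-0.05, 0.05),  # texture variation
        "colon_radius_scale": rng.uniform(0.8, 1.2),   # geometry jitter
    }

# A fixed seed makes each synthetic frame reproducible.
params = sample_scene_params(seed=42)
print(params)
```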
</div>
</div>
<div class="carousel-item">
<div class="carousel-title">
<table>
<tr>
<td><h4><b>Project G:</b></h4></td><td colspan="2"> <h4><b>Classify heart rhythms on electrocardiogram images</b></h4></td>
</tr>
<tr>
<td>Leaders:</td> <td>Florence Townend, Marta Masramon Muñoz</td> <td>UCL Hawkes Institute, UCL</td>
</tr>
</table>
</div>
<img class="rounded mx-auto d-block" src="images/project_G.jpg" width="800">
<div class="carousel-description"><p align="justify">
Disclosure: This project is part of the <a href="https://www.kaggle.com/c/bhf-data-science-centre-ecg-challenge/overview">BHF Data Science Centre ECG Challenge</a>. <br><br>The aim of this challenge is to create models that can identify and classify important disorders of the heart from electrocardiogram (ECG, EKG) images. The dataset includes synthetic ECG images that simulate the artefacts which might occur when photographing an ECG with a smartphone. Models that automate diagnosis from these images could be used to speed up heart disease care around the world.<br><br>This challenge will use the GenECG dataset, which contains synthetically generated ECG images created to incorporate the artefacts common on paper ECG images that are typically scanned or photographed. The waveform data used to create this dataset comes from the PTB-XL dataset, whose diagnoses have been adjudicated by two cardiologists. The GenECG dataset contains 21,799 ECG images that have passed a clinical Turing test in which expert observers could not distinguish between synthetic and real ECGs.<br><br>This challenge will focus on the identification of five important heart disease diagnoses: myocardial infarction, atrial fibrillation, hypertrophy, conduction disturbance, and ST/T change. Further information on these diagnoses can be found in the data section.
</div>
</div>
</div>
<a class="carousel-control-prev" href="#myCarousel" role="button" data-slide="prev">
<span class="carousel-control-prev-icon" aria-hidden="true"></span>
<span class="sr-only">Previous</span>
</a>
<a class="carousel-control-next" href="#myCarousel" role="button" data-slide="next">
<span class="carousel-control-next-icon" aria-hidden="true"></span>
<span class="sr-only">Next</span>
</a>
</div>
</div>
<br>
</section>
<section id="program" class="main">
<header class="major">
<h2>Program</h2>
</header>
<center>
<a href="images/program.png" class="button primary"><b>Download</b></a><br><br>
<object data="images/program.png" width="70%"> </object><br>
<a href="images/program.png" class="button primary"><b>Download</b></a>
</center>
</section>
<!-- Location -->
<section id="location" class="main">
<header class="major">
<h2>Location</h2>
</header>
<center>
<p>The hackathon is a purely <strong>in-person</strong> event and will take place in Rooms A, G and H at The Wolfson Centre - UCL GOSICH (UCL Great Ormond Street Institute of Child Health). The centre is located in Coram's Fields, at the following address: <b>Mecklenburgh Square, WC1N 2AD, London, UK</b>. Room A is on the ground floor; Rooms G and H are on the first floor.<br><br>
<p class="content" style="display:flex; flex-wrap:nowrap">
<iframe width="100%" height="350" frameborder="0" scrolling="no" marginheight="0" marginwidth="0" loading="lazy"
src="https://www.google.com/maps/embed?pb=!1m18!1m12!1m3!1d1043.6987434606604!2d-0.12060602502261317!3d51.52517774818245!2m3!1f0!2f0!3f0!3m2!1i1024!2i768!4f13.1!3m3!1m2!1s0x48761b38226fe5ff%3A0x9657cf9c642d5c9!2sUCL%20ICH%20Wolfson%20Centre!5e0!3m2!1sen!2suk!4v1728055982846!5m2!1sen!2suk">
</iframe>
<a href="https://www.ucl.ac.uk/estates/roombooking/building-location/?id=240"> <img src="images/GOSICH.jpg" height="350" alt=""/></a><br>
<br><br>
<p id="90HH">On Wednesday and Thursday, participants are invited to socialise with food and drinks at the end of day. On Wednesday, these will be hosted at the UCL Hawkes Institute (formerly UCL Centre for Medical Image Computing (CMIC)), located at <b>First Floor 90 High Holborn, WC1V 6LJ, London, UK</b>. On Thursday, the social activity will be held at The Lamb, a pub located at <b>94 Lamb's Conduit St, WC1N 1EA, London, UK.</b><br><br>
<p class="content" style="display:flex; flex-wrap:nowrap">
<iframe width="100%" height="350" frameborder="0" scrolling="no" marginheight="0" marginwidth="0" loading="lazy"
src="https://www.google.com/maps/embed?pb=!1m18!1m12!1m3!1d2482.740679810379!2d-0.12055178850246416!3d51.51797337169815!2m3!1f0!2f0!3f0!3m2!1i1024!2i768!4f13.1!3m3!1m2!1s0x48761b3585a9c137%3A0xe585112c24811eb3!2s90%20High%20Holborn%2C%20London%20WC1V%206BH!5e0!3m2!1sen!2suk!4v1729269697957!5m2!1sen!2suk">
</iframe>
<img src="images/holborn.png" height="350" alt=""/>
<p class="content" style="display:flex; flex-wrap:nowrap">
<iframe width="100%" height="350" frameborder="0" scrolling="no" marginheight="0" marginwidth="0" loading="lazy"
src="https://www.google.com/maps/embed?pb=!1m14!1m8!1m3!1d9929.8449386406!2d-0.119017!3d51.5230996!3m2!1i1024!2i768!4f13.1!3m3!1m2!1s0x48761b37b5be9f89%3A0xd5762155e1f9fb8!2sThe%20Lamb!5e0!3m2!1sen!2suk!4v1729269875183!5m2!1sen!2suk">
</iframe>
<img src="images/thelamb.png" height="350" alt=""/>
</center>
</section>
</div>
<p style="margin-bottom:0.3cm;">
<center>
<a href="https://www.eventbrite.co.uk/e/hawkes-hacks-tickets-1046969793837?utm-campaign=social&utm-content=attendeeshare&utm-medium=discovery&utm-term=listing&utm-source=cp&aff=ebdsshcopyurl" class="button primary"><b>Registration</b></a>
<!-- Footer -->
<footer id="footer">
This event is supported by the <h2>UCL Computer Science Strategic Research Fund </h2> <br>
<ul class="icons">
<span class="logo"><img src="images/ucl_logo.svg" width=15% alt="" /></span> 
<span class="logo"><img src="images/cmic_logo.svg" width=15% alt="" /></span> 
<span class="logo"><img src="images/weiss_logo.png" width=15% alt="" /></span>
</ul>
</footer>
</center>
<!-- Scripts -->
<script src="https://code.jquery.com/jquery-3.5.1.slim.min.js"></script>
<script src="bootstrap/js/jquery.min.js"></script>
<script src="bootstrap/js/bootstrap.min.js"></script>
<script src="assets/js/jquery.min.js"></script>
<script src="assets/js/jquery.scrollex.min.js"></script>
<script src="assets/js/jquery.scrolly.min.js"></script>
<script src="assets/js/browser.min.js"></script>
<script src="assets/js/breakpoints.min.js"></script>
<script src="assets/js/util.js"></script>
<script src="assets/js/main.js"></script>
</body>
<style>
.carousel-description {
background-color: #f7f7f7;
padding: 20px;}
/* .carousel {
height: 930px;}*/
.carousel-control-prev-icon, .carousel-control-next-icon {
height: 50px;
width: 50px;
filter: invert(50%);
background-size: 100%, 100%;
border-radius: 100%;}
.carousel-indicators li {
color: #e5e5e5;
width: 15px;
height: 15px;
top: 40px;
border-radius: 15px;
background-color: #e5e5e5;}
.carousel-indicators .active {
background-color: #8cc9f0;}
.carousel-control-prev-icon {
margin-left: -0%;}
.carousel-control-next-icon {
margin-right: -0%;}
.inline-block {
display: inline-block;}
</style>
</html>