+++
date = "16 Oct 2023"
draft = true
title = "Week 7: GANs and DeepFakes"
slug = "week7"
+++

(see bottom for assigned readings and questions)

# GANs and DeepFakes (Week 7)

Presenting Team: Aparna Kishore, Elena Long, Erzhen Hu, Jingping Wan

Blogging Team: Haochen Liu, Haolin Liu, Ji Hyun Kim, Stephanie Schoch, Xueren Ge

# Monday, October 9th: Generative Adversarial Networks and DeepFakes
Today's topic is how to use generative adversarial networks (GANs) to create fake images, and how to identify images generated by these models.

A Generative Adversarial Network (GAN) is a deep learning framework that pits two neural networks against each other. One network, the generator, strives to produce realistic data, such as images or text, while the other, the discriminator, aims to differentiate between genuine and generated data. Through a continuous feedback loop, GANs refine both networks, leading to the generation of increasingly convincing, high-quality content.

To help students better understand GANs, the presenting team held a “GAN Auction Game” to simulate the generating and discriminating roles of the generator and discriminator in a GAN. Students were divided into two groups: Group 1 provided three items (e.g., names of places), while Group 2 tried to identify whether each item was real or fake. In this setup, Group 1 played the generator and Group 2 the discriminator. During the game, Group 2 assigned a price to each item Group 1 proposed: a correct assessment of an item's veracity earned the corresponding price, whereas a misjudgment lost the same amount. After two rounds, the groups exchanged roles and played two more rounds. The detailed process of the game is summarized in the following table.

This game captures the training process of GANs, where the generator first proposes content (e.g., images or text) and the discriminator is trained to identify the real content. If the generator successfully creates content that fools the discriminator, it receives a high reward for further tuning. Conversely, if the discriminator correctly identifies the generated content, it also gets a large reward (the price in the auction game). This iterative training process is illustrated by the figures below.

Formally, the training process can be modeled as a two-player zero-sum game by performing min-max optimization on the objective function, and a Nash equilibrium is established between the generator and discriminator.
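
For reference, this is the minimax objective from Goodfellow et al. (2014), one of this week's assigned readings, where G is the generator, D the discriminator, p_data the data distribution, and p_z the noise prior:

```math
\min_G \max_D V(D, G) = \mathbb{E}_{x \sim p_{\text{data}}(x)}\big[\log D(x)\big] + \mathbb{E}_{z \sim p_z(z)}\big[\log\big(1 - D(G(z))\big)\big]
```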


The detailed training algorithm is shown here, in which the discriminator and generator are updated iteratively; a minimal sketch of this loop follows.
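
As an illustration (a toy PyTorch-style sketch, not the exact algorithm from the slides; the architectures, data, and hyperparameters are placeholders), the alternating updates look like this:

```python
import torch
import torch.nn as nn

# Toy setup: 2-D "real" data, with small MLPs as generator G and discriminator D.
latent_dim, data_dim, batch = 8, 2, 64
G = nn.Sequential(nn.Linear(latent_dim, 32), nn.ReLU(), nn.Linear(32, data_dim))
D = nn.Sequential(nn.Linear(data_dim, 32), nn.ReLU(), nn.Linear(32, 1), nn.Sigmoid())
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCELoss()

for step in range(1000):
    real = torch.randn(batch, data_dim) * 0.5 + 2.0  # stand-in for real samples
    z = torch.randn(batch, latent_dim)

    # 1) Discriminator update: push D(real) toward 1 and D(fake) toward 0.
    fake = G(z).detach()  # detach so only D's parameters receive gradients
    loss_d = bce(D(real), torch.ones(batch, 1)) + bce(D(fake), torch.zeros(batch, 1))
    opt_d.zero_grad()
    loss_d.backward()
    opt_d.step()

    # 2) Generator update: make D label freshly generated samples as real.
    loss_g = bce(D(G(z)), torch.ones(batch, 1))
    opt_g.zero_grad()
    loss_g.backward()
    opt_g.step()
```

Note that the generator step uses the non-saturating form (maximize log D(G(z))) suggested by Goodfellow et al., which gives stronger gradients early in training than directly minimizing log(1 - D(G(z))).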


Generally speaking, for a system that has only a generator and a discriminator, it is hard to tell whether they are doing well, because there are many bad local optima. One direct remedy is to introduce human feedback for evaluation. For example, we can borrow strategies from Large Language Models (LLMs), particularly Reinforcement Learning from Human Feedback (RLHF): experts iteratively rank the generated samples, offering direct reinforcement signals to improve the generator's output. This approach could enhance the realism and semantic alignment of the content created by GANs. However, RLHF has drawbacks, primarily the extensive need for expert involvement, which raises concerns about scalability to larger evaluations. An alternative is to include non-expert users, offering a broader range of feedback; crowdsourcing and user studies are suggested as methods to understand whether the generated content meets the target audience's needs and preferences.

For images or tabular data, when the data distribution is roughly known, the Inception Score (IS) serves as a useful metric. The score computes the KL divergence between the conditional class distribution and the marginal class distribution of the generated samples, as predicted by a pretrained classifier. A higher IS indicates clearer and more diverse images, though it does not always correlate with human judgment.
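
As a sketch of the computation (assuming `probs` holds the softmax outputs of a pretrained classifier, such as Inception-v3, on the generated samples; the names here are illustrative):

```python
import numpy as np

def inception_score(probs: np.ndarray, eps: float = 1e-12) -> float:
    """probs: (N, C) classifier softmax outputs on N generated samples."""
    p_y = probs.mean(axis=0)  # marginal class distribution p(y)
    # Per-sample KL(p(y|x) || p(y)); IS = exp(mean KL).
    kl = (probs * (np.log(probs + eps) - np.log(p_y + eps))).sum(axis=1)
    return float(np.exp(kl.mean()))
```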


Generative Adversarial Networks (GANs) also face two crucial challenges:

1. **Vanishing/exploding gradients**: During backpropagation, gradients can shrink (vanish) or grow excessively (explode), disrupting learning. Vanishing gradients stall the network's learning, as parameter updates become negligible; exploding gradients cause extreme, destabilizing updates that hinder convergence.
2. **Mode collapse**: The generator produces a limited set of similar samples, failing to represent the true diversity of the data. This occurs when the generator exploits weaknesses of the discriminator, concentrating on certain aspects of the data and neglecting others. It compromises the GAN's objective of generating diverse, realistic samples, indicating a breakdown in adversarial learning.

After this general introduction, the presenting team focused on using GANs to generate realistic images, which is regarded as one of the most important applications of GANs. GANs are very effective at generating convincing fake images.

A warm-up game asked the class to identify the fake, GAN-generated person.

In the figure above, one of the two faces is fake, but it is difficult to tell which one at first glance.

To identify fake images, existing methods either use deep-learning models trained to recognize fake samples or rely on direct observation by people. The presenting team introduced three methods that enable us to tell the difference. We will revisit these two faces later; for now we focus on the methods themselves.

The first method uses deep learning to discover underlying patterns that are characteristic of GAN-generated images.

For example, images generated by GANs tend to contain color artifacts, or artifacts invisible to the human eye, that deep learning models can learn to detect; a sketch of such a detector follows.
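
As an illustration of this approach (a minimal sketch, not any specific detector from the survey; the data, labels, and hyperparameters are placeholders), one could fine-tune a CNN as a binary real-vs-fake classifier:

```python
import torch
import torch.nn as nn
from torchvision import models

# Binary real-vs-fake classifier built on a ResNet backbone.
# weights=None keeps this sketch self-contained; in practice you would
# start from pretrained weights (e.g., ResNet18_Weights.DEFAULT).
model = models.resnet18(weights=None)
model.fc = nn.Linear(model.fc.in_features, 2)  # class 0 = real, 1 = generated

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

# One training step on a stand-in batch; real code would loop over a
# dataset of labeled real and GAN-generated images.
images = torch.randn(8, 3, 224, 224)
labels = torch.randint(0, 2, (8,))
loss = criterion(model(images), labels)
optimizer.zero_grad()
loss.backward()
optimizer.step()
```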


The second method is physics-based: the corneal specular highlights of the two eyes in a real face are strongly similar, while those in GAN-generated faces tend to differ.

The third method is physiology-based: the pupils of real eyes have strongly regular (circular or elliptical) shapes, while GAN-generated pupils usually have irregular shapes; the sketch below shows one way to quantify this.
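
One way to quantify pupil irregularity (a minimal sketch of the idea, not the exact procedure from the paper; it assumes a binary pupil mask has already been segmented) is to fit an ellipse to the pupil boundary and measure how well it matches:

```python
import cv2
import numpy as np

def pupil_ellipse_iou(mask: np.ndarray) -> float:
    """IoU between a binary pupil mask and its best-fit ellipse.

    Real pupils are close to elliptical (high IoU); GAN-generated
    pupils often have irregular boundaries (lower IoU).
    Assumes a non-empty mask whose largest contour has >= 5 points.
    """
    contours, _ = cv2.findContours(mask.astype(np.uint8),
                                   cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)
    contour = max(contours, key=cv2.contourArea)
    ellipse = cv2.fitEllipse(contour)  # rotated rect describing the ellipse
    ellipse_mask = np.zeros_like(mask, dtype=np.uint8)
    cv2.ellipse(ellipse_mask, ellipse, 1, -1)  # draw the ellipse, filled
    inter = np.logical_and(mask, ellipse_mask).sum()
    union = np.logical_or(mask, ellipse_mask).sum()
    return float(inter / union)
```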


With the help of these methods, we can say that the woman on the left in the figure shown earlier is fake: a deep learning model identifies color artifacts typical of GAN images, and her pupils are irregular.

The presenting team also believes that these identification methods will eventually be evaded by more advanced image-generation models, but new methods will be proposed in turn to distinguish images generated by those models. Generation and identification will evolve together.

In summary, generative models such as GANs have fundamentally transformed people's lives, and a substantial amount of future research and development lies ahead. Some future directions are listed above.

# Wednesday, October 11th: Creation and Detection of DeepFake Videos

Today's topic is the creation and detection of deepfake videos.

There are three aspects to be covered:

1. Introduction to deepfake videos
2. Detecting face-swap deepfakes with temporal dynamics
3. Discussion

Definition of a deepfake: an image or recording that has been deceptively altered to distort reality.

Ways of generating deepfakes: generative models (GANs, diffusion models, etc.).

There are also some side effects of face-swap methods.

The presenters introduce three different methods of generating deepfake videos:

1. Reenactment
2. Lip-sync deepfakes
3. Text-based deepfake synthesis

Reenactment: a deepfake that uses source images to drive and manipulate the target.

Example of a reenactment: the mouth movement in Trump's video is animated by a source actor.

Here is another example of reenactment, where the dancing in the target video is animated by a source actor.

Three main steps of reenactment:

1. Tracking facial features in both the source and target videos.
2. Aligning the input video features with a 3D face model using a consistency measure.
3. Transferring expressions from the source to the target, with refinement for realism.

Difference between face swap and reenactment: a face swap replaces the target's identity with the source's face, whereas reenactment preserves the target's identity and transfers only the source's expressions and movements.
+ +

Most methods use RGB images, while lip-sync relies on audio input. +

+

+
+
+ +

Text-based methods modify videos word by word; **phonemes** and **visemes** are the key units for pronunciation and visual analysis. Text edits are matched with phoneme sequences in the source video, and the parameters of a 3D head model are used to smooth the lip motions.

While previous works have accomplished a lot, an overlooked aspect in the creation of these deepfake videos is the human ear. Here is one recent work that tackles the problem from the perspective of the ear.

In response to deepfakes and manipulated content in general, roughly three types of authentication techniques have been proposed.

Today, our focus is on forensic methods to detect deepfakes. These methods can be categorized into low-level and high-level approaches.

The presenters asked the class to come up with criteria for identifying an authentic picture of Tom Cruise, and several factors were highlighted in the discussion.

During the class poll to determine which image appeared authentic, the majority of students voted for the second image, a few supported the first, and none voted for the third. Surprisingly, and contrary to the class's expectations, the first image was genuine, while the others were crafted by a TikTok user who creates deepfake content.

Many deepfake videos that emphasize facial expressions neglect the intricate movements of the human ear and the corresponding changes that occur with jaw movement.

The aural dynamics system tracks and annotates ear landmarks, using averaged local aural motion to capture both horizontal and vertical movements, mirroring those of a real person.

Using videos of Joe Biden, Angela Merkel, Donald Trump, and Mark Zuckerberg, the authors used a GAN to synthesize each individual's mouth region to match a new audio track and generate a lip-sync video.

The graphs show the distribution of correlations between the horizontal motion of three aural regions and the audio signal (left) and the vertical lip distance (right).

Fake videos show essentially no correlation, whereas real individuals show strong correlations, though these are not necessarily consistent from person to person; a toy sketch of this correlation test follows.
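
To make the idea concrete, here is a toy sketch of the correlation test (purely illustrative: the signals below are synthetic stand-ins for the per-frame ear-landmark motion and audio envelope that the paper extracts):

```python
import numpy as np

def pearson_corr(a: np.ndarray, b: np.ndarray) -> float:
    """Pearson correlation between two equal-length 1-D signals."""
    a = (a - a.mean()) / (a.std() + 1e-12)
    b = (b - b.mean()) / (b.std() + 1e-12)
    return float(np.mean(a * b))

rng = np.random.default_rng(0)
audio_env = rng.random(300)                          # audio envelope at video frame rate
ear_real = 0.8 * audio_env + 0.2 * rng.random(300)   # real: ear motion follows audio
ear_fake = rng.random(300)                           # fake: no coupling to audio

print(pearson_corr(ear_real, audio_env))  # high correlation, consistent with a real video
print(pearson_corr(ear_fake, audio_env))  # near zero, a red flag for a lip-sync fake
```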


The horizontal movements of the tragus and lobule of Trump's ears exhibited a positive correlation, a distinctive personal trait unlike the general pattern observed for the others.

The table shows the performance of each model; models with person-specific training achieve higher average testing accuracy.

Question 1: Limitations of the proposed methods and possible improvements.
+ +

As mentioned, the method has drawbacks: hair can hide the movement of the ears, large head movements interfere, and accurate ear tracking is difficult. Still, more facial and audio signals can be studied in the future.

Question 2: Anomalies found in deepfake videos.
+ +

The speaker in the first video does not blink for over 6 seconds, which is highly implausible: the average resting blink rate is about 0.283 blinks per second, i.e., roughly one blink every 3.5 seconds.

The second speaker's lips do not close for the 'm', 'b', and 'p' sounds (a phoneme-viseme mismatch).

Human pulse and respiratory motions are imperceptible to the human eye; amplifying these signals could serve as a method for detecting generated videos.

**Note:** This technique was originally designed for medical purposes, to identify potential health risks.


Both the technology for creating deepfake videos and the methods for detecting them will continue to advance, and addressing the problem requires more than simply generating and identifying them. Watermarking can distinguish deepfake videos from their sources. Beyond that, public education on the importance of collecting information from trustworthy sources, as well as further government regulation, should be considered.

# Readings

### For the first class (10/9)

- Ian J. Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, Yoshua Bengio. [_Generative adversarial nets_](https://arxiv.org/abs/1406.2661). 2014.
- Xin Wang, Hui Guo, Shu Hu, Ming-Ching Chang, Siwei Lyu. [_GAN-generated Faces Detection: A survey and new perspectives_](https://arxiv.org/abs/2202.07145). 2022.

### For the second class (10/11)

- Shruti Agarwal and Hany Farid. [_Detecting deep-fake videos from aural and oral dynamics_](https://openaccess.thecvf.com/content/CVPR2021W/WMF/html/Agarwal_Detecting_Deep-Fake_Videos_From_Aural_and_Oral_Dynamics_CVPRW_2021_paper.html). CVPR Workshops 2021.

## Optional Additional Readings

- Dilrukshi Gamage, Piyush Ghasiya, Vamshi Krishna Bonagiri, Mark E Whiting, and Kazutoshi Sasahara. [_Are Deepfakes Concerning? Analyzing Conversations of Deepfakes on Reddit and Exploring Societal Implications_](https://dl.acm.org/doi/10.1145/3491102.3517446). In CHI Conference on Human Factors in Computing Systems (CHI ’22), April 29-May 5, 2022, New Orleans, LA, USA. ACM, New York, NY, USA, 19 pages.
- Sina Alemohammad, Josue Casco-Rodriguez, Lorenzo Luzi, Ahmed Imtiaz Humayun, Hossein Babaei, Daniel LeJeune, Ali Siahkoohi, Richard G. Baraniuk. [_Self-Consuming Generative Models Go MAD_](https://arxiv.org/abs/2307.01850). 2023.
- Momina Masood, Marriam Nawaz, Khalid Mahmood Malik, Ali Javed, Aun Irtaza. [_Deepfakes generation and detection: State-of-the-art, open challenges, countermeasures, and way forward_](https://arxiv.org/abs/2103.00484). Applied Intelligence, June 2022.

### On GAN Training

- Salimans, T., Goodfellow, I., Zaremba, W., Cheung, V., Radford, A., & Chen, X. (2016). [_Improved techniques for training GANs_](https://papers.nips.cc/paper_files/paper/2016/file/8a3363abe792db2d8761d6403605aeb7-Paper.pdf)
- Arjovsky, M., & Bottou, L. (2017). [_Towards principled methods for training generative adversarial networks_](https://openreview.net/pdf?id=Hk4_qw5xe)
- Shumailov, I., Shumaylov, Z., Zhao, Y., Gal, Y., Papernot, N., & Anderson, R. (2023). [_The Curse of Recursion: Training on Generated Data Makes Models Forget_](https://arxiv.org/abs/2305.17493)

### Blogs and Tutorials

- [_How Generative Adversarial Network Works_](https://medium.datadriveninvestor.com/how-generative-adversial-network-works-3ddce0062b9)
- [_Increasing threat of deepfake identities_](https://www.dhs.gov/sites/default/files/publications/increasing_threats_of_deepfake_identities_0.pdf)

# Discussion Questions

**For Monday's class:** (as usual, post your response to at least one of these questions, or respond to someone else's response, or post anything you want that is interesting and relevant, before 8:29pm on Sunday, 8 October)

1. How might the application of GANs extend beyond image generation to other domains, such as text, finance, healthcare (or any other domain that you can think of), and what unique challenges might arise in these different domains? How can the GAN framework ensure fairness, accountability, and transparency in these applications?
2. Considering the challenges in evaluating the performance and quality of GANs, how might evaluation metrics or methods be developed to assess the quality, diversity, and realism of the samples generated by GANs in a more robust and reliable manner? Additionally, how might these evaluation methods account for different types of data (e.g., images, text, tabular data, etc.) and various application domains?
3. The authors identify two methods of detecting GAN-based images: physical and physiological. Is it possible to train a new model to modify a GAN-based image to hide these seemingly obvious flaws, like the reflections and pupil shapes? Would this approach quickly invalidate these two methods?
4. Do you agree with the authors that deep-learning based methods lack interpretability? Are the visible or invisible patterns detected by DL models really not understandable or explainable?

**Questions for Wednesday's class:** (post response by 8:29pm on Tuesday, 10 October)

1. What are the potential applications for the techniques discussed in the Agarwal and Farid paper beyond deep-fake detection, such as in voice recognition or speaker authentication systems?
2. How robust are the proposed ear analysis methods to real-world conditions like different head poses, lighting, and occlusion by hair?
3. What are your ideas for other ways to detect deepfakes?
4. Deepfake detection and generation seem similar to many other "arms races" between attackers and defenders. How do you see this arms race evolving? Will there be an endpoint with one side clearly winning?