We begin our paper by interrogating the concept of "depth" within deep learning. We argue that the term has a triple meaning: knowledgeable, the accuracy displayed in a model's ability to excel at certain machine learning tasks; layered, a visualization of the learned hierarchical structures (Figures 1-2); and impenetrable, the inherent lack of interpretability and understanding (as in "deep sea" or "deep space") of their algorithmic operations. By illustrating the implications of each of these meanings, we show that all three are intricately linked. Building on the unavoidable interdependence among these elements of deep learning, several concrete outcomes emerge from our characterization of deep learning models: (1) the need to introduce the concept of a deep problem; (2) the treatment of embeddings as objects of study; and (3) the need to train scholars from a wide range of fields in the technology of deep learning.
Hosted at Carleton University, Université d'Ottawa (University of Ottawa)
Ottawa, Ontario, Canada
July 20, 2020 - July 25, 2020
475 works by 1078 authors indexed
Conference cancelled due to coronavirus. Online conference held at https://hcommons.org/groups/dh2020/. Data for this conference were initially prepared and cleaned by May Ning.
Conference website: https://dh2020.adho.org/
Series: ADHO (15)