Abstract: As machine learning is increasingly being deployed in real-world applications, it has become critical to ensure that stakeholders understand and trust these models.
End users must have a clear understanding of model behavior so they can diagnose errors and potential biases in these models, and decide when and how to employ them. However, the most accurate models deployed in practice are not interpretable, making it difficult for users to understand where predictions come from and, thus, difficult to trust. In this talk, I will discuss several popular post hoc explanation methods and shed light on their advantages and shortcomings.
I will conclude the tutorial by highlighting open research problems in the field.

Speaker Bio: Hima Lakkaraju is an Assistant Professor at Harvard University focusing on the explainability, fairness, and robustness of machine learning models. She has also been working with domain experts in criminal justice and healthcare to understand the real-world implications of explainable and fair ML.

Abstract: Resilience to hardware failures is a key challenge, as well as a top priority, for deep learning (DL) accelerators, which have been deployed in a wide range of application domains, from edge devices and self-driving cars to cloud servers.
Although DL workloads exhibit a certain tolerance to errors, such tolerance alone cannot guarantee that a DL accelerator will meet the resilience requirements of a target application in the presence of hardware errors.
In this talk, I will first present a resilience analysis framework that takes advantage of the architectural properties of DL accelerators to analyze the behavior of hardware errors in these accelerators accurately and quickly, and I will discuss the key findings of this study. Finally, I will share our insights on how to design resilient DL accelerators.

Speaker Bio: Prior to joining the University of Chicago, she was a senior research scientist at Intel Labs. Professor Li received her Ph.D.
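To give a flavor of this kind of analysis (an illustrative sketch only, not the speaker's framework), hardware-error behavior is often studied by injecting faults, e.g. flipping a single bit in a stored weight, and measuring how far the output deviates:

```python
# Hypothetical single-bit-flip fault injection into a toy layer; the layer,
# shapes, and fault model are assumptions for illustration.
import numpy as np

def flip_bit(value, bit: int):
    """Flip one bit in the IEEE-754 representation of a float32 weight."""
    as_int = np.float32(value).view(np.uint32)
    return (as_int ^ np.uint32(1 << bit)).view(np.float32)

rng = np.random.default_rng(0)
weights = rng.standard_normal((16, 8)).astype(np.float32)  # one toy layer
x = rng.standard_normal(16).astype(np.float32)
baseline = np.maximum(weights.T @ x, 0)  # fault-free ReLU output

# Inject a fault into one random weight bit; measure the output deviation.
i, j, bit = rng.integers(16), rng.integers(8), int(rng.integers(32))
faulty = weights.copy()
faulty[i, j] = flip_bit(faulty[i, j], bit)
deviation = np.abs(np.maximum(faulty.T @ x, 0) - baseline).max()
print(f"bit {bit} of weight[{i},{j}]: max output deviation = {deviation:.4f}")
```

Repeating such injections over every weight and bit position is what makes brute-force analysis slow; exploiting the accelerator's architectural properties, as the talk describes, is one way to cut that cost.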
Abstract: Hardware plays a crucial role in both ML and security. On the one hand, recent advances in deep learning (DL), fueled by platform capabilities, have enabled a paradigm shift to include machine intelligence in a wide range of autonomous tasks. As a result, a largely unexplored surface has opened up for attacks that jeopardize the integrity of DL models and hinder their ubiquitous deployment across various intelligent applications. On the other hand, DL-based algorithms are also being employed to identify security vulnerabilities in long streams of multi-modal data and logs, including hardware logs.
Abstract: Advances in machine learning, notably deep learning, have led to computers matching or surpassing human performance in several cognitive tasks including vision, speech and natural language processing.
Software-defined capabilities
Intelligent monitoring and control systems are deployed to automatically transform network, compute, and storage resources according to the defined architecture policies. End users may define their requirements for resource provisioning and server deployment, while the intelligent control systems take on the responsibility of configuring the underlying infrastructure and managing the virtualized resources, as sketched below.
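A minimal sketch of this declarative pattern, using hypothetical names rather than any real SDI product's API: the user states what is needed, and a control loop reconciles the actual state toward it.

```python
# Hypothetical declarative provisioning: the user states *what* is needed;
# a control loop drives actual state toward the declared requirement.
from dataclasses import dataclass

@dataclass
class ResourceRequest:
    """End-user requirement: desired capacity, not concrete hardware."""
    vcpus: int
    memory_gb: int
    replicas: int

@dataclass
class Infrastructure:
    """Stand-in for the virtualized resource pool."""
    running_replicas: int = 0

    def provision(self, spec: ResourceRequest) -> None:
        print(f"provisioning VM: {spec.vcpus} vCPU / {spec.memory_gb} GiB")
        self.running_replicas += 1

def reconcile(spec: ResourceRequest, infra: Infrastructure) -> None:
    """Provision until the running state matches the declared requirement."""
    while infra.running_replicas < spec.replicas:
        infra.provision(spec)

reconcile(ResourceRequest(vcpus=4, memory_gb=16, replicas=3), Infrastructure())
```

Real SDI stacks run this reconciliation continuously, so drift between the declared requirement and the running infrastructure is corrected without operator involvement.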
Management services
At the infrastructure management level, SDI may provide a user interface for defining parameters such as SLA performance, availability, scalability, and elasticity. IT admins or internal IT users may also request the provisioning of resources.
The management services layer will take care of all infrastructure operations necessary to ensure that the desired standards of SLA and performance are maintained. Common attributes may include the following:

Intelligent virtualization
SDI should aim to:
- Enhance the portability of IT workloads
- Remove dependencies on the underlying infrastructure

While virtualization and layers of abstraction are necessary, an effective SDI also needs strong intelligence capabilities to orchestrate infrastructure resources and architecture for maximum performance and reliability. A sketch of the portability goal follows.
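One way to read that goal: workloads should target an abstraction layer rather than concrete infrastructure, so the substrate can change without touching workload code. The provider classes below are illustrative, not real APIs.

```python
# Workloads depend on an abstract provider interface, not on concrete
# infrastructure; the provider names here are hypothetical.
from abc import ABC, abstractmethod

class InfrastructureProvider(ABC):
    """Abstraction layer that hides the concrete infrastructure."""
    @abstractmethod
    def deploy(self, workload: str) -> None: ...

class OnPremProvider(InfrastructureProvider):
    def deploy(self, workload: str) -> None:
        print(f"[on-prem] scheduling {workload} on a local hypervisor")

class CloudProvider(InfrastructureProvider):
    def deploy(self, workload: str) -> None:
        print(f"[cloud] launching {workload} in a managed instance group")

def run(workload: str, provider: InfrastructureProvider) -> None:
    # The workload never names its host infrastructure directly.
    provider.deploy(workload)

run("billing-service", OnPremProvider())
run("billing-service", CloudProvider())  # same workload, different substrate
```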
Software-driven innovation
A software-centric SDI strategy focuses on using commercial off-the-shelf hardware instead of investing in proprietary, customized hardware solutions.
Modular design
Adaptability, a key attribute of an effective SDI strategy, is enabled by introducing modularity into the design of the software architecture.
To achieve modularity, look at techniques such as:
- Service-Oriented Architecture (SOA) design
- Microservices

Context awareness
Legacy infrastructure architectures may not be designed to collect contextual information, such as incidents, triggers, warnings, events, or other parameters, from related infrastructure components; an SDI, by contrast, is expected to collect and share it (see the sketch below).
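As a hedged sketch of that idea, with invented event names and fields: components publish incidents and warnings to a shared bus so that related components can react.

```python
# Tiny publish/subscribe bus for infrastructure context events; the event
# type and payload fields are made up for illustration.
from collections import defaultdict
from typing import Callable

class ContextBus:
    def __init__(self) -> None:
        self._subscribers: dict[str, list[Callable[[dict], None]]] = defaultdict(list)

    def subscribe(self, event_type: str, handler: Callable[[dict], None]) -> None:
        self._subscribers[event_type].append(handler)

    def publish(self, event_type: str, payload: dict) -> None:
        # Deliver the event to every component that registered interest.
        for handler in self._subscribers[event_type]:
            handler(payload)

bus = ContextBus()
# A storage array warns that a disk is degraded; the scheduler reacts.
bus.subscribe("disk.degraded", lambda e: print(f"draining workloads from {e['host']}"))
bus.publish("disk.degraded", {"host": "node-17", "severity": "warning"})
```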
Performance focused
Organizations may assess performance in terms of the availability, security, and compliance posture of the wider infrastructure. Establish a policy-driven approach to:
- Continuously monitor infrastructure performance
- Enforce the changes necessary to comply with IT, operational, and business policies

Instead of introducing manual automation scripts every time a change is needed, SDI can automatically identify the requirements and issue the appropriate commands to infrastructure components; a minimal sketch of such a policy loop appears below.
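The sketch shows one monitoring cycle of that policy-driven approach: declared thresholds are compared against observed metrics, and violations map to remediation commands. The policy format, metric names, and actions are all hypothetical.

```python
# Hypothetical policy evaluation: observed metrics are checked against
# declared thresholds, and violations map to remediation commands.
from dataclasses import dataclass

@dataclass
class Policy:
    metric: str
    threshold: float
    action: str  # command issued to the infrastructure when violated

POLICIES = [
    Policy(metric="cpu_utilization", threshold=0.80, action="scale_out"),
    Policy(metric="error_rate", threshold=0.01, action="restart_service"),
]

def enforce(observed: dict[str, float]) -> list[str]:
    """Return the commands for every policy whose threshold is exceeded."""
    return [p.action for p in POLICIES if observed.get(p.metric, 0.0) > p.threshold]

# One monitoring cycle: metrics come in, violations become commands.
print(enforce({"cpu_utilization": 0.93, "error_rate": 0.002}))  # ['scale_out']
```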
Open-source driven
Open-source technologies remove the barriers that prevent elastic and flexible operation of the infrastructure.

Benefits of SDI for the enterprise
Software-defined infrastructure allows organizations to control how IT workloads are distributed and optimized, maximizing the value potential of infrastructure deployments.