Seminar

Our seminar

The program for autumn 2022 is available here.

Laboratory seminars (PV273 in the course catalog) take place on Wednesdays, 15:00-16:00, room A505, FI MU, Botanická 68

Standard lecture format: a 30-40 minute presentation plus 15 minutes for questions; slides in English, presentation in English or Czech depending on the audience.

  • 15.2.2023
    Bc. Jiří Papoušek
    Solver for dynamic vehicle routing problems with time windows
    Abstract: The static vehicle routing problem with time windows (VRPTW) is a highly constrained routing problem that has been studied thoroughly, and solvers with good results have been proposed in recent years. Nevertheless, these algorithms are usually based on metaheuristics and/or exact methods. Surprisingly, the literature contains few successful applications of machine learning to vehicle routing problems.
    In the presentation, we will describe dynamic VRPTW formulated in the EURO Meets NeurIPS 2022 Vehicle Routing Competition. We will discuss the possibilities and limitations of combining well-established methods with reinforcement learning (specifically deep Q-learning) in order to implement an effective solver for this problem.
  • 22.2.2023
    doc. Ing. RNDr. Barbora Bühnová, Ph.D.
    Trust management in dynamic autonomous ecosystems
    Abstract: Digitalization is leading us toward a future where people, processes, data, and things are not only interacting with each other but might start forming societies on their own. In these dynamic systems enhanced by artificial intelligence, trust management on the level of human-to-machine as well as machine-to-machine interaction becomes an essential ingredient to supervise the safe and secure progress of our digitalized future. In this talk, we will look into the essential elements of trust management in complex digital ecosystems and discuss how trust-building can be leveraged to support people in safe interaction with other autonomous digital agents (e.g., self-driving cars, drones, and other robotic systems), highlighting our research on establishing trust among autonomous agents.
  • 1.3.2023
    Ing. Eva Výtvarová
    Predicting responsiveness to deep brain stimulation in Parkinson’s disease
    Abstract: Parkinson’s disease (PD) typically occurs in people over the age of 60; however, it can be diagnosed even in people in their forties. It is a neurodegenerative disorder whose first symptoms affect motion quality: tremor, rigidity, slowness of movement, and difficulty walking. Cognitive and behavioral problems such as depression, anxiety, and apathy soon follow. PD is mostly treated by medication that aims to reduce the symptoms; no cure is known. In severe cases of PD, microelectrodes are surgically placed inside the brain to help reduce motor symptoms. Since this is an invasive procedure, assessing the patient’s responsiveness to the stimulation is essential. We aim to find a pattern of functional brain connectivity that differs between good responders to the stimulation and non-responders.
    In the presentation, I will describe the dataset and walk listeners through the individual analysis steps: evaluating the effect of deep brain stimulation (DBS) on connectivity, estimating patterns of connectivity related to an improvement of motion after DBS, and detecting patterns related to cognition. Combined, these three steps will provide an understanding of the parts of the brain influenced by DBS and offer hypotheses for predicting the suitability of DBS for a specific patient.
  • 8.3.2023
    Mgr. Martin Brakl
    UDP packet reflector in P4 for Tofino Native Architecture
    Abstract: Computer networks are mostly built from networking devices with fixed-function hardware, which leads to many problems. The biggest one is the slow development of new networking applications and the lack of direct support for them in the hardware of networking devices. Software-defined networks (SDNs) aim to end this long-lasting era of legacy networking and fix the issues that slow down the development of computer networks.
    In this talk, we will look at the newest generation of SDNs driven by the P4 programming language and the Tofino ASIC chip developed specifically for P4. More specifically, I will show the basic architecture of the language and of the ASIC chip and how they cooperate. The second part of the talk will focus on an experimental implementation of a UDP packet reflector in P4 for Tofino utilizing virtual protocol-independent multicast.
  • 15.3.2023
    Spring holidays
  • 22.3.2023
    RNDr. Petra Němcová, RNDr. Jiří Filipovič, Ph.D.
    CaverDock (2.0?): Flexible receptors and robotic motion planning
    Abstract: Receptor-ligand interactions are an important part of many biologically relevant processes. The small ligand molecule needs to pass through a tunnel into the receptor before the interaction of interest begins. Both the receptor-ligand interaction and the ligand pathway need to be studied. CaverDock is a computational tool that simulates the transport of a ligand through a tunnel. CaverDock 1.0 works with a single conformation of the receptor.
    Since then, we have made two improvements. First, CaverDock can work with multiple receptor conformations and can therefore better simulate natural receptor behavior. Second, CaverDock has been extended to use robotic motion-planning algorithms that can work better in the open parts of the tunnel. We will demonstrate the use of both new methods on real data.
  • 29.3.2023
    Mgr. Pavel Novák
    Unraveling the security patterns in complex networks: A modelling perspective
    Abstract: The increasing dependence on interconnected systems has exposed organizations and societies to various security threats. Securing complex networks requires a systematic approach that can identify and mitigate potential vulnerabilities. This presentation explores modeling approaches to analyze and develop security patterns in complex networks. We will discuss how to represent complex networks using models, simulate security threats, and recommend optimal security strategies.
  • 5.4.2023
    Mgr. Aleš Křenek, Ph.D.
    Quick, easy, and dirty guide to finding the best hyperparameters of a neural network
    Abstract: The performance of neural network models strongly depends on a good setting of their hyperparameters (e.g., the number and size of layers, activation functions, learning rate). There are textbook guides for textbook examples; however, the more complex the network architecture, the less intuitive the process, so a state-space search is needed in most practical instances. We demonstrate the problem on a realistic yet simple image generation example and show how to approach the tuning with the Keras Tuner library (an illustrative sketch of a basic Keras Tuner search appears below the program).
    The search space and the computational requirements grow quickly with the number of hyperparameters. On the other hand, the search algorithms used are fairly easy to parallelize, and Keras Tuner supports this directly. We show how to exploit the parallelism using CERIT-SC Jupyter/Kubernetes compute resources.
    Finally, certain homogeneous subsets of hyperparameter configurations can be fused into a single, bigger model, which has the potential to use GPU acceleration more effectively. We will briefly discuss this approach and its tradeoffs.
  • 12.4.2023
    Ing. Václav Oujezský, Ph.D.
    TBA
  • 19.4.2023
    doc. RNDr. Tomáš Brázdil, Ph.D.
    TBA
  • 26.4.2023
    Canceled: CESNET seminar
  • 3.5.2023
    RNDr. Pavel Troubil, Ph.D.
    TBA
  • 10.5.2023
    Future SITOLA
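
For readers unfamiliar with the Keras Tuner library mentioned in the 5.4.2023 abstract, the following minimal sketch shows what a basic hyperparameter search can look like. The dataset (MNIST), the tuned layer ranges, the trial count, and the directory and project names are illustrative assumptions, not details of the talk.

    # Minimal, hypothetical Keras Tuner sketch: tune the size and activation of
    # one hidden layer plus the learning rate of a small classifier.
    import keras
    import keras_tuner

    def build_model(hp):
        """Build a small classifier; `hp` defines the searchable hyperparameters."""
        model = keras.Sequential([
            keras.Input(shape=(28, 28)),
            keras.layers.Flatten(),
            keras.layers.Dense(
                units=hp.Int("units", min_value=32, max_value=512, step=32),
                activation=hp.Choice("activation", ["relu", "tanh"])),
            keras.layers.Dense(10, activation="softmax"),
        ])
        model.compile(
            optimizer=keras.optimizers.Adam(
                learning_rate=hp.Choice("learning_rate", [1e-2, 1e-3, 1e-4])),
            loss="sparse_categorical_crossentropy",
            metrics=["accuracy"])
        return model

    # Random search is the simplest strategy; Keras Tuner also provides
    # Hyperband and Bayesian optimization tuners with the same interface.
    tuner = keras_tuner.RandomSearch(
        build_model,
        objective="val_accuracy",
        max_trials=20,                 # illustrative trial budget
        directory="tuning",            # illustrative output directory
        project_name="seminar_demo")   # illustrative project name

    (x_train, y_train), (x_val, y_val) = keras.datasets.mnist.load_data()
    tuner.search(x_train / 255.0, y_train, epochs=5,
                 validation_data=(x_val / 255.0, y_val))
    best_model = tuner.get_best_models(num_models=1)[0]

Since the trials are independent, Keras Tuner can also distribute them across several workers coordinated by a chief process, which is the kind of parallelism the talk refers to when exploiting the CERIT-SC Jupyter/Kubernetes resources.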

Past seminars

Contact: Hana Rudová
