~~NOTOC~~
======Dr. rer. nat. Daniel Nyga======
| {{:wiki:daniel_nyga.jpg?0x180}} ||||
|::: ||Postdoctoral Researcher\\ \\ ||
|:::|Room: | |
|:::|Tel: |+49 421 218 64010|
|:::|Fax: |+49 421 218 64047|
|:::|Mail: |nyga@cs.uni-bremen.de|
|:::| ||
====About====
Daniel Nyga is a postdoctoral researcher at the Institute for Artificial Intelligence (IAI), University of Bremen. He holds Bachelor's and Master's degrees in computer science from the Technical University of Munich (TUM), with a major in AI and machine learning, as well as a doctorate (summa cum laude) in computational science from the University of Bremen for his thesis on the [[http://nbn-resolving.de/urn:nbn:de:gbv:46-00105882-13|Interpretation of Natural-language Robot Instructions: Probabilistic Knowledge Representation, Learning, and Reasoning]] (see below). He was a visiting scholar in the Biointelligence Laboratory headed by Prof. Byoung-Tak Zhang at Seoul National University (SNU), South Korea, and in the Robust Robotics Group headed by Prof. Nicholas Roy at the Computer Science and AI Laboratory of MIT, USA.
====Dissertation====
[[http://nbn-resolving.de/urn:nbn:de:gbv:46-00105882-13|{{:team:cover.png?200 |}}]]//Abstract// -- A robot that can simply be told in natural language what to do -- this
has been one of the ultimate long-standing goals in both Artificial
Intelligence and Robotics research. In near-future applications,
robotic assistants and companions will have to understand and perform
commands such as "set the table for dinner", "make pancakes for
breakfast", or "cut the pizza into 8 pieces." Although such
instructions are only vaguely formulated, complex sequences of
sophisticated and accurate manipulation activities need to be carried
out in order to accomplish the respective tasks. The acquisition of
knowledge about how to perform these activities from huge collections
of natural-language instructions from the Internet has garnered a lot
of attention within the last decade. However, natural language is
typically highly underspecified, incomplete, ambiguous, and vague, and
thus requires powerful means of interpretation.
This work presents PRAC -- Probabilistic Action Cores -- an
interpreter for natural-language instructions which is able to resolve
vagueness and ambiguity in natural language and infer missing
information pieces that are required to render an instruction
executable by a robot. To this end, PRAC formulates the problem of
instruction interpretation as a reasoning problem in first-order
probabilistic knowledge bases. In particular, the system uses Markov
logic networks as a carrier formalism for encoding uncertain knowledge.
A novel framework for reasoning about unmodeled symbolic concepts is
introduced, which incorporates ontological knowledge from taxonomies
and exploits semantically similar relational structures in a domain of
discourse. The resulting reasoning framework thus enables more compact
representations of knowledge and exhibits strong generalization
performance when learned from very sparse data. Furthermore, a
novel approach for completing directives is presented, which applies
semantic analogical reasoning to transfer knowledge collected from
thousands of natural-language instruction sheets to new situations. In
addition, a cohesive processing pipeline is described that transforms
vague and incomplete task formulations into sequences of formally
specified robot plans. The system is connected to a plan executive that
is able to execute the computed plans in a simulator. Experiments
conducted in a publicly accessible, browser-based web interface
showcase that PRAC is capable of closing the loop from
natural-language instructions to their execution by a robot.
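The Markov-logic machinery the abstract refers to can be illustrated with a toy example (this is a generic illustration of Markov logic network inference, not PRAC itself; the domain, the single weighted formula, and its weight 1.5 are assumptions for the sketch). Each possible world receives an unnormalised weight exponential in the number of satisfied formula groundings, and queries are answered by summing over worlds:

```python
import itertools
import math

# Toy Markov logic network over the constants {Anna, Bob}:
# predicates Smokes(x), Cancer(x); one weighted formula
#   w = 1.5 :  Smokes(x) => Cancer(x)
# A "world" is a truth assignment to all ground atoms; its unnormalised
# weight is exp(w * number of satisfied groundings of the formula).

PEOPLE = ["Anna", "Bob"]
ATOMS = [f"Smokes({p})" for p in PEOPLE] + [f"Cancer({p})" for p in PEOPLE]
W = 1.5  # formula weight (assumed for illustration)

def n_satisfied(world):
    """Count satisfied groundings of Smokes(x) => Cancer(x)."""
    return sum(1 for p in PEOPLE
               if not world[f"Smokes({p})"] or world[f"Cancer({p})"])

def prob(query_atom, evidence):
    """Exact conditional probability by enumerating all possible worlds."""
    num = den = 0.0
    for bits in itertools.product([False, True], repeat=len(ATOMS)):
        world = dict(zip(ATOMS, bits))
        if any(world[a] != v for a, v in evidence.items()):
            continue  # world contradicts the evidence
        weight = math.exp(W * n_satisfied(world))
        den += weight
        if world[query_atom]:
            num += weight
    return num / den

p = prob("Cancer(Anna)", {"Smokes(Anna)": True})
print(round(p, 3))  # → 0.818, i.e. sigmoid(1.5)
```

Exhaustive enumeration is exponential in the number of ground atoms; systems like PRAC rely on approximate inference (e.g. sampling) instead, but the probability being computed is the same.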
====Master's Thesis====
[[https://ai.uni-bremen.de/_media/team/ma_nyga_small.pdf|{{:team:ma-nyga-cover.png?180 |}}]]//Abstract// -- This thesis investigates boosting algorithms for classifier learning in the presence of imbalanced classes and uneven misclassification costs. In particular, we address the well-known AdaBoost procedure and its extensions for coping with class imbalance, which typically has a negative impact on the classification accuracy regarding the minority class. We give an extensive survey of existing boosting methods for classification and enhancements for tackling the class imbalance problem, including cost-sensitive variants. Regularized boosting methods, which are favourable when dealing with noise and overlapping class distributions, are also considered. We theoretically analyze several strategies for introducing costs and their applicability in the case of imbalance. For one variant (AdaC1) we show that it is unstable under certain conditions. We identify drawbacks of an often-cited cost-sensitive boosting algorithm (AdaCost), both theoretically and empirically. We also show that an algorithm for tackling imbalance without using explicit costs (RareBoost) is a special case of the RealBoost algorithm, a probabilistic variant of AdaBoost. We confirm our findings by empirical evaluation on several real-world data sets and academic benchmarks.
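The baseline procedure the thesis analyses can be sketched in a few lines. Below is plain AdaBoost with one-dimensional threshold stumps as weak learners, a toy illustration on made-up data rather than any of the cost-sensitive variants (AdaC1, AdaCost, RareBoost) examined in the thesis:

```python
import math

# Minimal AdaBoost with 1-D threshold stumps. Labels are +1/-1; each
# round fits the stump minimising the weighted error, then reweights:
# misclassified points gain weight exponentially.

def fit_stump(xs, ys, w):
    """Best (weighted error, threshold, polarity) stump under weights w."""
    best = None
    for t in sorted(set(xs)):
        for pol in (+1, -1):
            err = sum(wi for xi, yi, wi in zip(xs, ys, w)
                      if (pol if xi >= t else -pol) != yi)
            if best is None or err < best[0]:
                best = (err, t, pol)
    return best

def adaboost(xs, ys, rounds):
    n = len(xs)
    w = [1.0 / n] * n
    ensemble = []  # list of (alpha, threshold, polarity)
    for _ in range(rounds):
        err, t, pol = fit_stump(xs, ys, w)
        err = max(err, 1e-12)           # guard against division by zero
        if err >= 0.5:                  # weak learner no better than chance
            break
        alpha = 0.5 * math.log((1 - err) / err)
        ensemble.append((alpha, t, pol))
        # Exponential reweighting, then renormalisation.
        w = [wi * math.exp(-alpha * yi * (pol if xi >= t else -pol))
             for xi, yi, wi in zip(xs, ys, w)]
        s = sum(w)
        w = [wi / s for wi in w]
    return ensemble

def predict(ensemble, x):
    score = sum(a * (pol if x >= t else -pol) for a, t, pol in ensemble)
    return 1 if score >= 0 else -1

# Tiny non-linearly-separable sample: positives form an interval, which
# no single stump can represent but a weighted vote of three can.
xs = [0, 1, 2, 3, 4, 5]
ys = [-1, -1, 1, 1, -1, -1]
model = adaboost(xs, ys, rounds=3)
print([predict(model, x) for x in xs])  # → [-1, -1, 1, 1, -1, -1]
```

Under class imbalance, the uniform initial weights and the symmetric reweighting step are precisely what the cost-sensitive variants discussed in the thesis modify, by scaling the weight updates of minority-class examples.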
====Projects====
Daniel Nyga's research interests revolve around topics on Artificial Intelligence and Data Science in general, as well as Machine Learning, Data Mining and Pattern Recognition techniques. In particular, he is interested in probabilistic graphical and relational knowledge representation, learning and inference methods, and in applications thereof in natural-language understanding, knowledge processing and robotics.
He was involved in the European FP7 research projects [[http://www.robohow.org|RoboHow]] and [[http://www.acat-project.eu|ACAT]].
He is the lead developer in the projects [[http://www.pracmln.org|pracmln]], [[http://www.actioncores.org/|PRAC]] and [[http://www.pyrap.org|pyrap]].
His GitHub profile can be found [[http://www.github.com/danielnyga|here]].
====Fields of Interest====
* Artificial Intelligence
* Probability Theory
* Probabilistic Knowledge Processing
* Machine Learning
* Statistical Relational Learning
* Data Mining/Knowledge Discovery
* Automated Learning/Understanding of WWW information
  * Natural-Language Understanding
====Teaching====
* AI: Knowledge Acquisition and Representation ([[https://ai.uni-bremen.de/teaching/le-ki2-ws20|WS2020/21]]) (Lecturer)
* Foundations of Artificial Intelligence ([[https://ai.uni-bremen.de/teaching/le-ki1_ss20|SS2020]]) (Lecturer)
* AI: Knowledge Acquisition and Representation ([[https://ai.uni-bremen.de/teaching/le-ki2-ws19|WS2019/20]]) (Lecturer)
* Foundations of Artificial Intelligence ([[https://ai.uni-bremen.de/teaching/le-ki1_ss19|SS2019]]) (Lecturer)
* AI: Knowledge Acquisition and Representation ([[https://ai.uni-bremen.de/teaching/le-ki2-ws18|WS2018/19]]) (Lecturer)
* Foundations of Artificial Intelligence ([[https://ai.uni-bremen.de/teaching/le-ki1_ss18|SS2018]]) (Lecturer)
* AI: Knowledge Acquisition and Representation ([[https://ai.uni-bremen.de/teaching/le-ki2-ws17|WS2017/18]]) (Lecturer)
* Master Seminar: Data Mining and Data Analytics ([[http://ai.uni-bremen.de/teaching/datamining_ss17|SS2017]])
* AI: Knowledge Acquisition and Representation ([[https://ai.uni-bremen.de/teaching/le-ki2-ws16|WS2016/17]]) (Lecturer)
* AI: Knowledge Acquisition and Representation ([[https://ai.uni-bremen.de/teaching/le-ki2-ws15|WS2015/16]]) (Lecturer)
* Foundations of Artificial Intelligence ([[https://ai.uni-bremen.de/teaching/le-ai-ss15|SS2015]]) (Tutorial/Co-Lecturer)
* AI: Knowledge Acquisition and Representation ([[https://ai.uni-bremen.de/teaching/le-ki2-ws14|WS2014/15]]) (Lecturer)
* Foundations of Artificial Intelligence ([[https://ai.uni-bremen.de/teaching/kiss2014|SS2014]]) (Tutorial)
* AI: Knowledge Acquisition and Representation ([[https://ai.uni-bremen.de/teaching/ki2-2013|WS2013/14]]) (Co-Lecturer)
* Foundations of Artificial Intelligence ([[https://ai.uni-bremen.de/teaching/kiss2013|SS2013]]) (Tutorial)
* Technical Cognitive Systems (Lecture & Tutorial, at TUM) ([[https://ias.cs.tum.edu/teaching/ss2012/techcogsys|SS2012]])
* Techniques in Artificial Intelligence (Tutorial, at TUM) ([[https://ias.cs.tum.edu/teaching/ws2011/240927786|WS2011/12]])
* Discrete Probability Theory (Tutorial, at TUM) ([[http://www14.in.tum.de/lehre/2011SS/dwt/|SS2011]])
====Supervised Theses====
* Lifelong Learning of First-order Probabilistic Models for Everyday Robot Manipulation (Master's Thesis, Marc Niehaus)
* Scaling Probabilistic Completion of Robot Instructions through Semantic Information Retrieval (Master's Thesis, Sebastian Koralewski)
* To see what no robot has seen before - Recognizing objects based on natural-language descriptions (Master's Thesis, Mareike Picklum)
  * Web-enabled Learning of Models for Word Sense Disambiguation (Bachelor's Thesis, Stephan Epping)
* Grounding Words to Objects: A Joint Model for Co-reference and Entity Resolution Using Markov Logic Networks for Robot Instruction Processing (Diploma Thesis, Florian Meyer)
====== Publications ======
bibfiles/allpublications.bib
nyga