Phone: +55 31 3409 5860

Bruno Castro da Silva

Universidade Federal do Rio Grande do Sul
Departamento de Informática Aplicada

Collaborating Researcher

Information extracted from Lattes platform

Last update: 2020/05/12


Ph.D. in Computer Science, University of Massachusetts Amherst, 2014
M.Sc. in Computing, Universidade Federal do Rio Grande do Sul, 2007
B.Sc. in Computer Science, Universidade Federal do Rio Grande do Sul, 2004

Current projects

2017 – Present: Rio Grande do Sul and Oslo Collaboration on Artificial Intelligence and Robotics (ROCAIR)
Technology is transforming our society and the way in which we work and interact. We are surrounded by computer systems and robots that have recently become more intelligent and able to adapt to our needs and preferences. Thanks to progress in machine learning and artificial intelligence, we now see a major transition: instead of us having to adapt to technology and services, those systems are now adapting to us. While algorithms and robotic systems are progressing rapidly, it is challenging for a single research group to provide up-to-date graduate courses while also conducting state-of-the-art research. This project establishes a new international collaboration between leading groups in these fields in Norway and Brazil, with the main objective of strengthening education in self-learning systems and robotics at the participating institutions. This will be achieved through activities such as short-term reciprocal mobility stays by staff (for research collaboration), curriculum and teaching material development, and jointly organized intensive courses/workshops. In addition, the project also supports long-term student exchange between Norway and Brazil. (Project number UTF-2016-CAPES-SIU/10007)
Team: Jim Torresen (coordinator), Bruno Castro da Silva, Dante Barone, Kai Olav Ellefsen, Kyrre Glette, Mariana Luderitz Kolberg, Edson Prestes e Silva Júnior.
2016 – Present: GOAL-Robots
Funding Institution: European Commission, Project number: 713010. This project aims to develop a new paradigm to build open-ended learning robots called "Goal-based Open-ended Autonomous Learning" (GOAL). GOAL rests upon two key insights. First, to exhibit an autonomous open-ended learning process, robots should be able to self-generate goals, and hence tasks to practice. Second, new learning algorithms can leverage self-generated goals to dramatically accelerate skill learning. The new paradigm will allow robots to acquire a large repertoire of flexible skills in conditions unforeseeable at design time with little human intervention, and then to exploit these skills to efficiently solve new user-defined tasks with no/little additional learning. This innovation will be essential in the design of future service robots addressing pressing societal needs. The project will develop the GOAL paradigm by pursuing three main objectives: (1) advance our understanding of how goals are formed and underlie skill learning in children; (2) develop innovative computational architectures and algorithms supporting (2a) the self-generation of useful goals based on user/task independent mechanisms such as intrinsic motivations, and (2b) the use of such goals to efficiently and autonomously build large repertoires of skills; (3) demonstrate the potential of GOAL with a series of increasingly challenging demonstrators in which robots will autonomously develop complex skills and use them to solve difficult challenges in real-life scenarios. The interdisciplinary project consortium is formed by leading international roboticists, computational modelers, and developmental psychologists working with complementary approaches. 
This will allow the project to greatly advance our understanding of the fundamental principles of open-ended learning and to produce a breakthrough in the field of autonomous robotics by producing for the first time robots that can autonomously accumulate complex skills and knowledge in a truly open-ended way.
Team: Gianluca Baldassarre (coordinator), Bruno Castro da Silva, Andrew G. Barto, Jochen Triesch, Kevin O'Regan, Jan Peters.
2014 – Present: On-line Model-learning Policy Search
Young Talent Attraction Fellowship (Bolsa de Atração de Jovens Talentos, Wouter Caarls). Proposal: 151403, Process: 88881.030341/2013-01. Reinforcement learning is a powerful method for learning control policies in a variety of applications such as robotics, scheduling, and traffic and network congestion control. Because the environments in which such systems must operate are constantly changing, it is infeasible to pre-program a solution that works in all cases. Through trial and error, a reinforcement learning agent optimizes a control policy for the desired task without prior knowledge of the environment. However, especially in robotics, its applicability has thus far been limited by long learning times. In this project, we aim to develop fully on-line model-learning policy search techniques, thereby combining the low number of trials of model-learning policy search with the short computation time of on-line methods. We have recently developed such an on-line model-learning method in the context of value-based reinforcement learning, where we achieved a speedup of two orders of magnitude over standard on-line techniques. An effective combination of the two should allow efficient learning of complex control policies for systems with many state variables.
Team: Daniel Sadoc Menasche (coordinator), Bruno Castro da Silva, Wouter Caarls, Bruno Campos.

Current applied research projects

See all projects in Lattes

Recent publications

Articles in journals

Preventing undesirable behavior of intelligent machines
2019. Science.
A task-and-technique centered survey on visual analytics for deep learning model engineering
Analysing the impact of travel information for minimising the regret of route choice
Gaussian Processes for Learning and Control: A Tutorial with Examples

Papers in conferences

Parameterized Melody Generation with Autoencoders and Temporally-Consistent Noise
2019. International Conference on New Interfaces for Musical Expression.
Comparing Multi-Armed Bandit Algorithms and Q-learning for Multiagent Action Selection: a Case Study in Route Choice
2018. 2018 International Joint Conference on Neural Networks (IJCNN 2018).
Towards Designing Optimal Reward Functions in Multi-Agent Reinforcement Learning Problems
2018. 2018 International Joint Conference on Neural Networks (IJCNN 2018).
Learning to Minimise Action Regret in Route Choice
2017. International Conference on Autonomous Agents and Multiagent Systems (AAMAS 2017).
Extending a Coupling Metric for Characterization of Traffic Networks: an Application to the Route Choice Problem
2017. 11th Workshop-School on Agents, Environments, and Applications (WESAAC 2017).
Task-Based Behavior Generalization via Manifold Clustering
2017. IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2017).
Determinação de Propriedades Óticas de Pigmentos Queimados a Partir de Redes Neurais Artificiais
2017. 61o Congresso Brasileiro de Cerâmica.
Using Topological Statistics to Bias and Accelerate Route Choice: preliminary findings in artificial and real-world road networks
2016. 9th International Workshop on Agents in Traffic and Transportation (ATT@IJCAI 2016).
Energetic Natural Gradient Descent
2016. 33rd International Conference on Machine Learning (ICML 2016).
Learning Parameterized Motor Skills on a Humanoid Robot
2014. IEEE International Conference on Robotics and Automation (ICRA 2014).
Active Learning of Parameterized Skills
2014. 31st International Conference on Machine Learning (ICML 2014).
Learning Parameterized Skills
2012. 29th International Conference on Machine Learning (ICML 2012).
Dealing with Non-Stationary Environments using Context Detection
2006. 23rd International Conference on Machine Learning (ICML 2006).

Extended abstracts in conferences

A Flexible Approach for Designing Optimal Reward Functions
2017. International Conference on Autonomous Agents and Multiagent Systems (AAMAS 2017).
Context-Based Concurrent Experience Sharing in Multiagent Systems
2017. International Conference on Autonomous Agents and Multiagent Systems (AAMAS 2017).
Biasing the behavior of organizationally adept agents
2013. Proceedings of the 12th International Conference on Autonomous Agents and Multiagent Systems (AAMAS 2013).
Distributed constraint propagation for diagnosis of faults in physical processes
2007. 6th International Joint Conference on Autonomous Agents And Multiagent Systems (AAMAS 2007).
Improving Reinforcement Learning with Context Detection
2006. 5th International Joint Conference On Autonomous Agents And Multiagent Systems (AAMAS 2006).

Abstracts in conferences

RL-CD: Dealing with Non-Stationarity in Reinforcement Learning
2006. Proceedings of the 21st Conference on Artificial Intelligence (AAAI 2006).
Implementação de mecanismos de suporte a múltiplos modelos de motorista em um simulador microscópico de tráfego
2003. XV Salão de Iniciação Científica UFRGS.
Implementação de um sistema genérico baseado em agentes cognitivos piagetianos
2002. XIV Salão de Iniciação Científica UFRGS.
O modulor como sistema de design
2001. XIII Salão de Iniciação Científica UFRGS.
Cityzoom - Ambiente integrado de suporte a decisão em planejamento urbano
2001. X Feira de Iniciação Científica UFRGS.

See all publications in Lattes

Current students


Rafael Garcia. Task-Based Behavior Generalization via Manifold Clustering. Start: 2017. Universidade Federal do Rio Grande do Sul (co-advisor)
Julia Naomi Rosenfield Boeir. Determinação de Propriedades Óticas de Pigmentos Queimados a Partir de Redes Neurais Artificiais. Start: 2017. Universidade Federal do Rio Grande do Sul (co-advisor)


See all students in Lattes