Programme Structure for 2026/2027
| Curricular Courses | Type | Credits (ECTS) |
|---|---|---|
| **1st Year** | | |
| Mathematical Foundations for Deep Learning | 2nd Cycle > Mandatory Courses | 6.0 |
| Cognition & Emotion | 2nd Cycle > Mandatory Courses | 6.0 |
| Mathematical Methods in Machine Learning | 2nd Cycle > Mandatory Courses | 6.0 |
| Introduction to Machine Learning | 2nd Cycle > Mandatory Courses | 6.0 |
| Applied Artificial Intelligence Project | 2nd Cycle > Mandatory Courses | 6.0 |
| Societal Artificial Intelligence | 2nd Cycle > Mandatory Courses | 6.0 |
| Computational Optimization | 2nd Cycle > Mandatory Courses | 6.0 |
| Knowledge and Reasoning in Artificial Intelligence | 2nd Cycle > Mandatory Courses | 6.0 |
| **2nd Year** | | |
| Advanced Machine Learning | 2nd Cycle > Mandatory Courses | 6.0 |
| Master Project in Artificial Intelligence | Final Work | 42.0 |
| Master Dissertation in Artificial Intelligence | Final Work | 42.0 |
Mathematical Foundations for Deep Learning
LO1. Recognize the various components of a deep learning model built with neural networks.
LO2. Implement stand-alone Python programs of simplified versions of the components identified in LO1.
LO3. Relate different architectures to the problems they are suited to solve.
LO4. Apply regularization techniques to improve the performance of deep learning models.
LO5. Know in detail, and apply, the DQN algorithm in the context of reinforcement learning.
LO6. Use the Keras library to implement deep learning models that solve problems in image recognition, natural language processing and reinforcement learning.
LO7. Know fundamental theorems about asymptotic neural networks and apply these results in the critical analysis of deep learning models.
1. Linear regression.
2. Linear classifiers.
3. Dense neural networks and backprop.
4. Universal approximation theorems and asymptotic behavior.
5. Automatic differentiation.
6. Introduction to TensorFlow and Keras.
7. Regularization techniques.
8. Convolution networks.
9. Data augmentation and fine-tuning.
10. Introduction to deep reinforcement learning.
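As an illustration of topics 1 and 3 above, a linear model can be fitted by batch gradient descent on the mean squared error. This is our own minimal sketch, not course code; the data and learning rate are illustrative.

```python
import numpy as np

# Fit y = w*x + b by gradient descent on the MSE (a hypothetical example).
rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(200, 1))
y = 3.0 * X[:, 0] + 0.5 + rng.normal(0, 0.05, size=200)

w, b, lr = 0.0, 0.0, 0.1
for _ in range(500):
    pred = w * X[:, 0] + b
    err = pred - y
    w -= lr * 2 * np.mean(err * X[:, 0])  # dMSE/dw
    b -= lr * 2 * np.mean(err)            # dMSE/db

print(round(w, 2), round(b, 2))
```

The same update rule, generalized by automatic differentiation (topic 5), is what frameworks such as TensorFlow and Keras apply to every parameter of a deep network.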
Students must choose between two assessment models:
Model 1 (Regular Assessment):
Consists of two individual, in-person practical assessment moments (each worth 50%). In these assessments, students will be asked to adapt and modify the code developed during class, complemented by their own independent work. This independent preparation is organized through weekly worksheets.
Model 2 (Grade Improvement):
If students are not satisfied with their results from Model 1, they may complete a group project (with up to 4 members), which will account for 30% of the final grade. The remaining 70% will be based on the results from the Regular Assessment.
Note: All assessment moments may be subject to an oral discussion.
- François Chollet, “Deep Learning with Python”, Manning, Second Edition 2021.
- Maxim Lapan, “Deep Reinforcement Learning Hands On”, Packt, Second Edition 2020.
- Ian Goodfellow, Yoshua Bengio, and Aaron Courville, "Deep Learning", MIT Press, 2016.
Cognition & Emotion
By the end of this course, the student should be able to:
1. Know the origins of the emotion-cognition debate
2. Know the main theoretical perspectives on the relationship between emotion and cognition
3. Know, analyse and evaluate the main methods and research techniques on the influence of emotions on cognition, and on the cognitive factors involved in emotions.
4. Know and explain how emotional, cognitive and socio-cognitive processes influence each other and are integrated.
5. Understand the practical implications of this field, being able to apply the acquired knowledge to a range of contexts
1. Definitions and assumptions regarding the cognition-emotion debate
1.1. Cognition: notions of cognitive representation and of information processing; overview of cognitive functions
1.2. Emotion and related concepts
1.3. Historical and philosophical aspects of the cognition-emotion debate
1.4. Methodological approaches to the study of cognition-emotion interactions
2. The influence of emotion on cognition
2.1. Judgment, decision-making and processing modes
2.2. Attention and cognitive control
2.3. Memory
2.4. Language
2.5. Emotional traits and cognitive performance
2.6. Affective disorders
3. The influence of cognition on emotion
3.1. Cognitive processes and emotion regulation
3.2. Impact of cognitive disorders on emotion
4. Integration of cognitive and emotional processes: interactions in the brain
Assessment throughout the semester: Group assignment, including the discussion of a paper and a written report on the topic (40%); test (60%). Students pass if they achieve at least 9.5 points (out of 20) in each evaluation element.
Students who fail or miss the assessment throughout the semester are eligible for the exam (100%). They pass if their grade is at least 9.5.
Eysenck, M. W., & Keane, M. T. (2020). Cognitive psychology: A student's handbook (8th ed.). Routledge: Taylor & Francis Group.
Storbeck, J., & Clore, G. (2007). On the interdependence of cognition and emotion. Cognition and Emotion, 21, 1212-1237.
Power, M. J., & Dalgleish, T. (Eds.). (2016). Handbook of cognition and emotion: From order to disorder (3rd ed.). London: Psychology Press.
Pessoa, L. (2008). On the relationship between emotion and cognition. Nature Reviews Neuroscience, 9, 148-158.
Okon-Singer, H., Hendler, T., Pessoa, L., & Shackman, A. (2015). The neurobiology of emotion-cognition interactions: Fundamental questions and strategies for future research. Frontiers in Human Neuroscience, 9, 58.
Ochsner, K. N., & Phelps, E. (2007). Emerging perspectives on emotion-cognition interactions. Trends in Cognitive Sciences, 11(8), 317-318.
Duncan, S., & Barrett, L. F. (2007). Affect is a form of cognition: A neurobiological analysis. Cognition and Emotion, 21, 1184-1211.
Mathematical Methods in Machine Learning
LO1. Know fundamental concepts of linear algebra, probability and information theory.
LO2. Apply the previous techniques (LO1) in the context of machine learning, using, in particular, Kernel methods and Gaussian processes in regression and classification problems.
LO3. Master basic Fourier and wavelet analysis techniques.
LO4. Employ the previous methods (LO1 and LO3) in the context of signal and image processing.
LO5. Apply basic dynamic programming algorithms to solve problems using reinforcement learning.
LO6. Implement the aforementioned techniques in Python.
I - Linear algebra, probabilities and information.
1. Inner products and matrix decomposition.
2. Principal components analysis.
3. Random variables, information and entropy.
4. Kernel methods and Gaussian processes in regression and classification.
II - Signal processing.
1. Discrete Fourier transform, FFT and convolution.
2. Discrete Wavelets.
3. Applications to sound and image processing.
III - Dynamic programming and reinforcement learning.
1. The context of reinforcement learning.
2. Bellman’s equation.
3. Iterative methods and applications.
4. Monte Carlo methods.
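The convolution theorem from part II.1 states that circular convolution in the time domain is pointwise multiplication in the frequency domain. The following is a hedged sketch of that identity; the signals are our own toy example, not course material.

```python
import numpy as np

# Circular convolution via the FFT: conv(x, h) = IFFT(FFT(x) * FFT(h)).
x = np.array([1.0, 2.0, 3.0, 4.0])
h = np.array([1.0, 0.0, 0.0, 1.0])

fast = np.fft.ifft(np.fft.fft(x) * np.fft.fft(h)).real

# Direct circular convolution, for comparison.
n = len(x)
direct = np.array([sum(x[k] * h[(i - k) % n] for k in range(n))
                   for i in range(n)])

print(np.allclose(fast, direct))  # → True
```

The FFT route costs O(n log n) against O(n²) for the direct sum, which is why it underpins fast filtering in sound and image processing (part II.3).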
The assessment in this course will follow exclusively the modality of assessment throughout the semester, due to its strong practical component. A minimum attendance of 50% of classes is required.
Assessment will consist of 2 worksheets, 2 projects, and a final written test.
Each worksheet is worth 10% of the final grade (together worth 20%), and the worksheets are completed individually in writing.
Each project is worth 15% of the final grade (together worth 30%), and must be developed and implemented in Python. Projects are preferably completed in groups of three and will be subject to a final discussion.
The final written test (worth 50%) coincides with the first and/or second exam periods and covers all course material, excluding any Python implementation. To pass this course, a minimum grade of 8.0 points (out of 20) is required in the final written test.
The special exam period is reserved for the cases provided for in Article 14 of the General Regulation for the Assessment of Knowledge and Skills (RGACC).
1. Ian Goodfellow, Yoshua Bengio and Aaron Courville, "Deep Learning", MIT Press, 2016.
2. Christopher M. Bishop, "Pattern Recognition and Machine Learning", Springer, 2006.
3. Richard S. Sutton and Andrew G. Barto, "Reinforcement Learning: an introduction", MIT press, 2nd edition, 2018.
4. Aston Zhang, Zachary C. Lipton, Mu Li and Alexander J. Smola, "Dive into Deep Learning", 1st Edition, Cambridge University Press, 2024.
1. Francois Chollet, "Deep Learning with Python", Second Edition, Manning Publications Co., 2021.
2. Steven L. Brunton and J. Nathan Kutz, “Data-Driven Science and Engineering: Machine Learning, Dynamical Systems, and Control”, Cambridge University Press, 1st edition, 2019.
Introduction to Machine Learning
At the end of the course the student should be able to:
OA1. Identify the main historical milestones of ML.
OA2. Know its relations with other scientific areas.
OA3. Enumerate and recognize some of its applications.
OA4. Know the characteristics of the main algorithms in the field of Machine Learning.
OA5. Know, and be able to explain, the main concepts of an algorithm that exemplifies: Supervised Learning (symbolic and sub-symbolic), Unsupervised Learning, Reinforcement Learning and Search Algorithms.
OA6. Explain in full detail one of the learning algorithms studied.
OA7. Implement a learning algorithm or use one in a non-trivial problem.
CP1. Historical notes on Machine Learning. Relationship with other disciplines. Applications.
CP2. Machine Learning problems and approaches;
CP3. Unsupervised Learning;
CP4. Supervised Learning (symbolic and sub-symbolic);
CP5. Reinforcement Learning;
CP6. Search methods and Genetic/Evolutionary Algorithms;
CP7. Data pre-processing, results validation;
CP8. Speedup of ML algorithms;
CP9. ML algorithm implementation.
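As a concrete instance of unsupervised learning (CP3) and of implementing an algorithm from scratch (CP9), here is a minimal one-dimensional k-means sketch. The data, initial centers and variable names are our own illustration, not course material.

```python
# One-dimensional k-means with k=2: alternate between assigning points
# to the nearest center and recomputing each center as its cluster mean.
data = [1.0, 1.2, 0.8, 8.0, 8.3, 7.9]
centers = [0.0, 10.0]  # deliberately poor initial centers

for _ in range(10):
    clusters = [[], []]
    for x in data:
        i = 0 if abs(x - centers[0]) <= abs(x - centers[1]) else 1
        clusters[i].append(x)
    centers = [sum(c) / len(c) if c else centers[i]
               for i, c in enumerate(clusters)]

print([round(c, 2) for c in centers])  # → [1.0, 8.07]
```

Each iteration can only decrease the within-cluster squared error, so the loop converges quickly on well-separated data such as this.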
It is only possible to pass this course through assessment throughout the semester; there is no final exam.
Assessment elements:
- 4 practical exercises, code and report, in groups of 2 (10% each) during the academic term with face-to-face discussion;
- 1 test (20%), during the academic term;
- group project (40%) which includes a report, code and oral presentation
In the special exam period, assessment consists of an individual project, to be handed in one week before the special period (as in the other terms), and a written test that replaces the practical exercises and mini-test components. The weights of these assessment elements are the same as those indicated above.
Attendance is not used as an evaluation or failure criterion.
Ethem Alpaydin, Introduction to Machine Learning, Fourth Edition, 2020, https://mitpress.mit.edu/9780262043793/introduction-to-machine-learning/
- Tom Mitchell, Machine Learning, 1997, http://www.cs.cmu.edu/~tom/mlbook.html
- Simon Haykin, Neural Networks and Learning Machines, Third Edition, 2009, https://cours.etsmtl.ca/sys843/REFS/Books/ebook_Haykin09.pdf
- R. Duda and P. Hart, Pattern Classification and Scene Analysis., 1973, https://www.amazon.com/Pattern-Classification-Scene-Analysis-Richard/dp/0471223611
Applied Artificial Intelligence Project
Upon completion of the course, the student should:
OA1: Know a process for organizing AI projects
OA2: Know how to write a scientific paper based on the results of an experiment
OA3: Have gained experience of the tasks usually involved in an AI project
OA4: Have had contact with the problems inherent in the use of realistic data
P1. CRISP-DM
P2. Introduction to scientific paper writing
P3. Introduction to project organization and management
P4. AI project practice
Assessment throughout the semester.
Presentation of a group project at the end of the term (50%) and monitoring of the project and interim presentations assessed individually (50%).
If the student is entitled to EEF, they can submit an individual project at that time with a discussion. This subject does not have a final exam.
F. Martínez-Plumed et al., "CRISP-DM Twenty Years Later: From Data Mining Processes to Data Science Trajectories," IEEE Transactions on Knowledge and Data Engineering, vol. 33, no. 8, pp. 3048-3061, Aug. 2021, doi: 10.1109/TKDE.2019.2962680.
Wortman-Wunder, Emily, & Kate Kiefer (1998). Writing the Scientific Paper. Writing@CSU, Colorado State University. https://writing.colostate.edu/resources/writing/guides/, accessed 25 June 2024.
Societal Artificial Intelligence
LO1 Apply relevant standards, ethical, social, privacy, and governance considerations, and thus demonstrate an understanding of issues related to the practice of an AI professional.
LO2 Analyse and discuss social impact and professional issues related to AI and the deployment of AI systems, evaluating the implications of delegating control and decision making to intelligent systems, including issues of fairness, bias, transparency, accountability and explainability of AI.
LO3 Analyse and evaluate case studies, and to assess the work of peers.
LO4 Communicate effectively to a variety of audiences through a range of modes and media, specifically, through written technical reports and visual and oral presentations.
LO5 Apply responsible and ethical research principles and choose appropriate methods to analyse, theorise and justify conclusions in AI professional practice and research.
S1 Introduction: a brief history of AI
S2 How AI works
S3 Bias, Ethics, Fairness, Privacy, Robustness, and Trustworthiness
S4 Regulatory frameworks
S5 AI for the present: case studies
S6 The interpretability challenge: XAI
S7 AI for the future: special topics in AI
Because this course is mainly concerned with cutting-edge concepts and methods in a still-evolving state of the art, assessment is not based on a 100 per cent written exam.
Students are assessed through:
(a) group work in workshops (GW),
(b) presenting a conference or journal paper (RP) to peers (at the end of the semester)
(c) an individual research paper (IR) (on a set date in the 1st term),
according to the formula: 0.3 x GW + 0.3 x RP + 0.4 x IR.
For approval, each assessment element requires a minimum mark of 8 (out of 20).
S. Russell, P. Norvig. Artificial Intelligence: A Modern Approach. Pearson, Upper Saddle River, NJ, 2009.
A. Holzinger, P. Kieseberg, E. Weippl & A Min Tjoa. Current Advances, Trends and Challenges of Machine Learning and Knowledge Extraction: From Machine Learning to Explainable AI. Springer Lecture Notes in Computer Science LNCS 11015. Cham: Springer, pp. 1-8, 2018.
C. Molnar. Interpretable machine learning. A Guide for Making Black Box Models Explainable, 2019. https://christophm.github.io/interpretable-ml-book/.
T. Miller. Explanation in artificial intelligence: Insights from the social sciences. Artificial Intelligence, 267, 1–38, 2019.
ALLEA The European Code of Conduct for Research Integrity, European Union, URL: https://allea.org/code-of-conduct/
Artificial intelligence act,
URL: https://www.europarl.europa.eu/RegData/etudes/BRIE/2021/698792/EPRS_BRI(2021)698792_EN.pdf
Additional reading material to be determined during classes according to the topics covered, namely: scientific journal articles, opinion pieces, and books for in-class discussion.
Computational Optimization
At the end of the course, the student should be able to:
LG1 - Use linear and quadratic optimization methods.
LG2 - Apply numerical methods of classical optimization.
LG3 - Understand, apply and adapt the most important gradient descent methods.
LG4 - Understand, apply and adapt some metaheuristics.
PC1 - Introduction and review
1.1 - Introduction
1.2 - Linear and quadratic programming
1.3 - Differential calculus; unconstrained and constrained optimization
1.4 - Numerical methods
PC2 - Gradient descent and variations
2.1 - Gradient Descent
2.2 - Stochastic Gradient Descent
2.3 - Momentum
2.4 - Adagrad
2.5 - Others
PC3 - Metaheuristics
3.1- Simulated Annealing
3.2 - Tabu Search
3.3 - Evolutionary Algorithms
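Topics 2.1 and 2.3 above can be sketched in a few lines: gradient descent with a momentum term accumulates a velocity from past gradients. This is our own toy example on the quadratic f(x) = (x - 3)², with illustrative hyperparameter values.

```python
# Gradient descent with momentum on f(x) = (x - 3)^2.
def grad(x):
    return 2 * (x - 3)  # f'(x)

x, v = 0.0, 0.0
lr, beta = 0.1, 0.9   # step size and momentum coefficient (illustrative)
for _ in range(500):
    v = beta * v - lr * grad(x)  # velocity accumulates past gradients
    x = x + v

print(round(x, 4))  # → 3.0
```

Setting `beta = 0` recovers plain gradient descent (topic 2.1); the momentum term damps oscillations across ill-conditioned valleys, which is why variants of it appear throughout machine learning training.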
The modes of assessment are:
1) Assessment throughout the semester:
- 1 mini-test (15%), completed individually during the semester;
- 1 research project (45%) on one or more topics of the course unit, carried out in groups of 2 or 3 students (including a report, code, and oral presentation during the semester);
- 1 individual written test (40%), to be taken during either the 1st or 2nd exam period.
Approval through continuous assessment requires a minimum grade of 8.0 out of 20 in all components.
2) Assessment by Examination: written exam (100%), to be taken during either the 1st or 2nd exam period.
The minimum passing grade for this course is 9.50 points (out of 20), rounded to a final grade of 10 points.
Sra, Suvrit, Sebastian Nowozin, and Stephen J. Wright, eds., Optimization for machine learning, Mit Press, 2012.
Dréo, Johann, et al., Metaheuristics for hard optimization: methods and case studies, Springer Science & Business Media, 2006.
Dive into Deep Learning, Chapter 11: Optimization Algorithms, https://d2l.ai/chapter_optimization/
Postek, Krzysztof and Zocca, Alessandro and Gromicho, Joaquim and Kantor, Jeffrey, Hands-On Mathematical Optimization with AMPL in Python, 2024. https://ampl.com/mo-book
Knowledge and Reasoning in Artificial Intelligence
The learning outcomes follow established international standards and are the following:
LO1. Know several of the existing knowledge and reasoning systems.
LO2. Understand how to represent knowledge and to reason for each system.
LO3. Grasp the advantages and shortcomings of each system.
LO4. Be able to choose the proper system when confronted with a given problem.
LO5. Know how to build knowledge bases for each system.
LO6. Be able to represent and solve real problems involving knowledge representation and reasoning by using diverse systems.
The syllabus is as follows:
PC1: Object-based representation
PC2: Structured Descriptions
PC3: Ontologies and Knowledge Domain Representation
PC4: Knowledge Representation in a Social Context (Semantic Web)
PC5: Logic Programming
PC6: Non-monotonic Logic and ASP-Answer Set Programming
PC7: Uncertainty and Degrees of Belief
PC8: Abductive Reasoning
PC9: Qualitative Reasoning
PC10: Constraint Satisfaction
PC11: Representation and Reasoning by Actions and Plans
PC12: Abstraction, Reformulation and Approximation
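As an illustration of constraint satisfaction (PC10), a small map-colouring problem can be solved by backtracking search. The graph, colour set and function names below are our own hypothetical example, not course material.

```python
# Backtracking search for a map-colouring CSP: adjacent regions
# must receive different colours.
neighbours = {
    "A": ["B", "C"],
    "B": ["A", "C"],
    "C": ["A", "B", "D"],
    "D": ["C"],
}
colours = ["red", "green", "blue"]

def solve(assignment):
    if len(assignment) == len(neighbours):
        return assignment
    var = next(v for v in neighbours if v not in assignment)
    for c in colours:
        # Constraint check: no neighbour already has colour c.
        if all(assignment.get(n) != c for n in neighbours[var]):
            result = solve({**assignment, var: c})
            if result:
                return result
    return None  # dead end: backtrack

solution = solve({})
print(solution)
```

The same assign-check-backtrack skeleton, refined with heuristics and constraint propagation, underlies practical CSP solvers.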
In the assessment throughout the semester, students will have to take:
- Individual written test on the entire course syllabus (60%) - to be taken in the exam season (1st or 2nd exam).
- Research work (in groups) on one of the UC topics, with a report and oral presentation (40%). The oral presentation is made in class time during the semester. The grade for the research paper is divided 50% between each component and group members can have different grades.
Both assessment components in the semester-long assessment have a minimum mark of 8.
Alternatively, students can take a single exam (100%), on either of the two exam dates.
In the special exam period, students take the exam (100%).
Advanced Machine Learning
OA1: Understand the main neural network architectures for processing sequential data
OA2: Apply simplified versions of some of the sequential data processing architectures to concrete problems
OA3: Describe the architecture of transformer-based and self-attention models, such as BERT, GPT-2 and GPT-3, as well as variants of these models
OA4: Apply pre-trained transformer-based models to case studies, making use of transfer learning
OA5: Describe the operation of generative models, such as Generative Adversarial Networks, Variational Autoencoders and Flow-based models, and autoregressive models
OA6: Know current trends in applying language models to real problems
Introduction and fundamental concepts revision
P1: Revisiting Neural Networks
- Feed Forward networks and backpropagation
- Regularization Techniques: Dropout, weight decay, early stopping
- Hyperparameter Tuning: Grid search, random search, Bayesian optimization
P2: Sequential data
- Recurrent Neural Networks
- Conditional sequence models
- LSTMs
- CNNs for sequential data
- Attention Mechanisms in sequential models
P3: Transformers
- Transformer architecture and attention
- Processing Natural Language
- Transformer Language Models
- Multimodal Transformers
- Applications, dialog systems, recent trends
P4: Generative Modeling
- Generative Adversarial Networks (GANs)
- Variational autoencoders (VAE)
- Autoregressive models
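The attention mechanism at the heart of P3 can be written down compactly: Attention(Q, K, V) = softmax(Q Kᵀ / √d) V. The following is our own NumPy sketch of single-head self-attention, with illustrative dimensions; it is not course code.

```python
import numpy as np

# Scaled dot-product self-attention, the core Transformer operation.
def self_attention(X, Wq, Wk, Wv):
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    d = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d)
    # Numerically stable row-wise softmax over the scores.
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ V

rng = np.random.default_rng(0)
X = rng.normal(size=(4, 8))  # 4 tokens, embedding dimension 8
Wq, Wk, Wv = (rng.normal(size=(8, 8)) for _ in range(3))
out = self_attention(X, Wq, Wk, Wv)
print(out.shape)  # → (4, 8)
```

Each output row is a weighted mixture of all value vectors, so every token can attend to every other token in one step, unlike the strictly sequential processing of RNNs and LSTMs (P2).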
Assessment throughout the semester: Mini-tests (15%) + Project (35%) + Final Test (50%), held during the 1st exam period (simultaneously with the exam). A minimum grade of 8 (out of 20) is required in each assessment component.
Students may take the final exam (1st, 2nd, or special exam periods) if they choose this assessment mode or if they do not achieve a passing grade in the continuous assessment. The final exam consists of an individual test covering all course content.
The final grade for the Project is assigned individually to each student, based on an oral examination, and depends on the code developed, the report presented, and the student’s performance in the oral. Test questions may address aspects related to the Project.
- Deep Learning: Ian Goodfellow, Yoshua Bengio, Aaron Courville 2016 MIT Press
- Deep Learning - Foundations and Concepts, Christopher M. Bishop , Hugh Bishop, Springer 2024, ISBN 978-3-031-45467-7
- Probabilistic Machine Learning: An Introduction, Murphy, Kevin, (2022), https://probml.github.io/pml-book/book1.html
- Eli Stevens, Luca Antiga, Thomas Viehmann, "Deep Learning with PyTorch", Manning Publications, 2020 (free book)
- Aurélien Géron, “Hands-On Machine Learning with Scikit-Learn and TensorFlow”, O’Reilly, 2017
Master Project in Artificial Intelligence
Master Dissertation in Artificial Intelligence
On completion of the course, students should:
(LO1) Be aware of the typical structure of a dissertation;
(LO2) Know the methodology for researching and presenting a systematic literature review on a topic;
(LO3) Be aware of the need to take a critical view of research results and other tools to support the literature review;
(LO4) Have a clear idea of the consequences of ineffective communication and reflect on their own weaknesses in this area.
(P1) Structuring a dissertation;
(P2) Systematic literature review;
(P3) Presentation of results;
This course has a final assessment by jury, on submission of the thesis.
The assessments made during the semester are only formative.
Course slides.