Seminars organized by The Doctoral School of Engineering
-
- Speaker: Hans Kellerer
- Dates: Monday 12/20/2010, 2.30 pm.
- Room: Meeting room DIA, 1st floor
- Abstract: We design a fully polynomial-time approximation scheme (FPTAS) for a single-machine scheduling problem to minimize the total weighted
earliness and tardiness with respect to a common restrictive due date. Note that no constant-ratio approximation algorithm
has been known for this problem so far. Our approach is based on adapting an FPTAS for a special version of the knapsack problem to minimize a convex quadratic
non-separable function. For the continuous relaxation of this knapsack problem we give an algorithm with quadratic time complexity. The running
time of each presented FPTAS is strongly polynomial.
For further information, please contact G. Nicosia (nicosia@dia.uniroma3.it, tel. 06 57333455).
- Materials: -
- Webpage: -
-
- Speaker: Fabrizio Frati
- Dates: Thursday 10/21/2010, 5.30 pm.
- Room: Meeting room DIA, 1st floor
- Abstract: We prove that planar graphs have $O(\log^4 n)$ queue number, thus
improving upon the previous $O(\sqrt n)$ upper bound. Consequently,
planar graphs admit 3D straight-line crossing-free grid drawings in $O(n
\log^c n)$ volume, for some constant $c$, thus improving upon the
previous $O(n^{3/2})$ upper bound.
This is joint work with Giuseppe Di Battista (Roma Tre University) and
Janos Pach (EPFL Lausanne and Renyi Institute Budapest).
The same presentation will be given at FOCS 2010, the 51st Annual IEEE
Symposium on Foundations of Computer Science, Las Vegas, October 23-26, 2010.
- Materials: -
- Webpage: -
-
- Speaker: Massimo Rimondini (DIA)
- Dates: Thursday 09/16/2010, 12.00 pm.
- Room: Meeting room DIA, 1st floor
- Abstract: Compliance with the Gao-Rexford conditions is perhaps the most
realistic explanation of Internet routing stability, although BGP is
known to be prone to oscillations. Informally, the Gao-Rexford
conditions assume that (i) the
business relationships between Internet Service Providers (ISPs) yield a
hierarchy, (ii) each ISP behaves in a rational way, i.e., it does not
offer transit to other ISPs for free, and (iii) each ISP ranks routes
through customers above routes through providers and peers.
We show an efficient algorithm that, given a BGP configuration, checks
whether there exists an assignment of peer-peer
and customer-provider relationships that complies with the Gao-Rexford
conditions. Also, we show that preferring routes through peers to those
through providers, although more suitable than the original formulation
of Condition (iii) to describe the business relationships between ISPs,
makes the problem NP-hard.
The above results hold both in the well-known theoretical framework used
to model BGP and in a more realistic setting where (i)
local preferences are assigned on a per-neighbor basis and (ii) transit
is allowed from/to specific neighbor pairs. Observe that the latter
setting, where policy complexity only depends on the number of
neighbors, is very close to the way in which operators typically
configure routers.
This is joint work with Luca Cittadini (Roma Tre University), Giuseppe
Di Battista (Roma Tre University), Thomas Erlebach (University of
Leicester), and Maurizio Patrignani (Roma Tre University).
The same presentation will be given at the 18th IEEE International
Conference on Network Protocols, Kyoto, Japan, Oct. 5-8, 2010.
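Condition (i) above, that customer-provider relationships yield a hierarchy, amounts to requiring that the customer-to-provider digraph is acyclic. A minimal sketch of that acyclicity test (with invented example data; this is not the authors' verification algorithm, which searches over relationship assignments):

```python
# Sketch: Gao-Rexford condition (i) holds only if the directed graph of
# "customer -> provider" edges has no cycle. We test this with Kahn's
# topological-sort algorithm. ISP names and edges below are hypothetical.
from collections import defaultdict, deque

def is_hierarchy(customer_provider_edges):
    """Return True iff the customer->provider digraph is acyclic."""
    succ = defaultdict(list)
    indeg = defaultdict(int)
    nodes = set()
    for cust, prov in customer_provider_edges:
        succ[cust].append(prov)
        indeg[prov] += 1
        nodes.update((cust, prov))
    queue = deque(n for n in nodes if indeg[n] == 0)  # ISPs with no customers
    seen = 0
    while queue:
        n = queue.popleft()
        seen += 1
        for m in succ[n]:
            indeg[m] -= 1
            if indeg[m] == 0:
                queue.append(m)
    return seen == len(nodes)  # all nodes drained <=> no cycle

# A provider cycle (A buys transit from B, B from C, C from A) breaks the hierarchy:
print(is_hierarchy([("A", "B"), ("B", "C")]))               # True
print(is_hierarchy([("A", "B"), ("B", "C"), ("C", "A")]))   # False
```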
- Materials: -
- Webpage: -
-
- Speaker: Denilson Barbosa (University of Alberta, Edmonton)
- Dates: Monday 07/05/2010, h. 3.00 pm.
- Room: N7 DIA
- Abstract: We consider the top-k Approximate Subtree Matching (TASM) problem:
finding the k best matches of a small query tree, e.g., a DBLP article
with 15 nodes, in a large document tree, e.g., DBLP with 26M nodes,
using the canonical tree edit distance as a similarity measure between
subtrees. Evaluating the tree edit distance for large XML trees is
difficult: the best known algorithms have cubic runtime and quadratic
space complexity, and, thus, do not scale. Our solution is
TASM-postorder, a memory-efficient and scalable TASM algorithm. We
prove an upper bound on the maximum subtree size for which the tree
edit distance needs to be evaluated. The upper bound depends on the
query and is independent of the document size and structure. A core
problem is to efficiently prune subtrees that are above this size
threshold. We develop an algorithm based on the prefix ring buffer that
allows us to prune all subtrees above the threshold in a single
postorder scan of the document. The size of the prefix ring buffer is
linear in the threshold. As a result, the space complexity of
TASM-postorder depends only on k and the query size, and the runtime of
TASM-postorder is linear in the size of the document. Our experimental
evaluation on large synthetic and real XML documents confirms our
analytic results.
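The pruning idea can be illustrated with a small sketch: a single postorder pass computes subtree sizes bottom-up, so every subtree exceeding the size threshold can be discarded as a match candidate on the fly. (Toy tree and threshold are invented; the actual TASM-postorder algorithm additionally bounds memory with a prefix ring buffer, which this recursive version does not.)

```python
# Sketch of threshold-based subtree pruning in one postorder traversal.
# A tree is a (label, children) pair; only subtrees of size <= tau survive
# as candidates for the tree-edit-distance computation.

def candidate_subtrees(tree, tau):
    """Return (label, size) for each subtree of size <= tau, in postorder."""
    candidates = []

    def postorder(node):
        label, children = node
        size = 1 + sum(postorder(c) for c in children)  # sizes known bottom-up
        if size <= tau:
            candidates.append((label, size))
        return size

    postorder(tree)
    return candidates

# Document tree: a has children b and c; b has children d and e.
doc = ("a", [("b", [("d", []), ("e", [])]), ("c", [])])
print(candidate_subtrees(doc, 2))  # [('d', 1), ('e', 1), ('c', 1)]
```

Subtrees rooted at b (size 3) and a (size 5) are pruned without ever being materialized as candidates.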
- Materials: -
- WebPage: -
-
- Speaker: Denilson Barbosa (University of Alberta, Edmonton)
- Dates: Thursday 07/01/2010, h. 3.00 pm.
- Room: N7 DIA
- Abstract: We advocate an automated approach for verifying mappings between
source and target databases in which semantics are taken into
account, and that avoids two serious limitations of current
verification approaches: reliance on availability of sample source
and target instances, and reliance on strong statistical assumptions.
We discuss how our approach can be integrated into the workflow
of state-of-the-art mapping design systems, and how all of its
necessary inputs can be obtained. Our approach relies on checking
the entailment of verification statements derived directly from the
schema mappings and from semantic annotations on the variables used
in such mappings. We discuss how such verification statements can be
produced and how such annotations can be extracted from different
kinds of alignments of schemas with domain ontologies.
Such alignments can be derived semi-automatically; thus, our
framework might also prove useful in greatly reducing the amount
of input required from domain experts in the development of mappings.
- Materials: -
- WebPage: -
-
- Speaker: Denilson Barbosa (University of Alberta, Edmonton)
- Dates: Wednesday 06/30/2010, h. 3.00 pm.
- Room: N3 DIA
- Abstract: Social network analysis aims at uncovering and understanding the
structures and patterns resulting from social interactions among
individuals and organizations engaged in a common activity. Since the
early days of the field, networks have been modeled as graphs representing
social actors and the relations between them. The field has become very active
with the maturity of computational machinery to handle large-scale
graphs and, more recently, the automated gathering of social data. This
talk will describe ReaSoN: ongoing work towards building a system for
extracting, visualizing and exploring social networks. The talk will
focus on the infrastructure behind ReaSoN, the extraction of
social networks from structured citation databases as well as
unstructured social media, some data management issues that arise in
building such systems, notions of network visibility, and the processing
of user-defined queries.
As an illustration, the talk covers the first incarnation of ReaSoN: a
social network resulting from academic research built from ACM DL, DBLP
and Google Scholar data. In doing so, ReaSoN contributes to the
understanding as well as fostering of the social networks underlying
research.
- Materials: -
- WebPage: -
-
- Speaker: Andrea Calì (Brunel University), Davide Martinenghi (Politecnico di Milano)
- Dates: Tuesday 06/10/2010, h. 11.30 am.
- Room: N7 DIA
- Abstract: The term Deep Web refers to the data content that is created dynamically as
the result of a specific search on the Web. In this respect, such content
resides outside web pages, and is only accessible through interaction with the
web site – typically via HTML forms. It is believed that the size of the Deep
Web is several orders of magnitude larger than that of the so-called Surface
Web, i.e., the web that is accessible and indexable by search engines.
Usually, data sources accessible through web forms are modeled by relations
that require certain fields to be selected – i.e., some fields in the form
need to be filled in. These requirements are commonly referred to as access
limitations in that access to data can only take place according to given
patterns. Besides data accessible through web forms, access limitations may
also occur i) in legacy systems where data scattered over several files are
wrapped as relational tables, and ii) in the context of Web services, where
similar restrictions arise from the distinction between input parameters and
output parameters. In such contexts, computing the answer to a user query
cannot be done as in a traditional database; instead, a query plan is needed
that provides the best answer possible while complying with the access
limitations. In these talks, we illustrate the semantics of answers to queries
over data sources under access limitations and present techniques for query
answering in this context. We show different techniques to optimize query
answering both at query-plan generation time and at query-plan execution time.
We analyze the influence on query answering of integrity constraints on the
sources, of the kind usually found in database schemata. We present prototype
systems aimed at querying the Deep Web, and show their achievements.
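The core of query answering under access limitations can be sketched as a fixpoint computation: starting from the constants in the query, repeatedly call each source with values already known, feeding newly obtained values back as inputs, until nothing new appears. (Relations, schemas and data below are invented toy examples, not the speakers' systems.)

```python
# Sketch: each source behaves like a web form whose first attribute must be
# filled in (an access pattern with one input and one output position).
# Tuples are extracted iteratively until a fixpoint is reached.

# r1(author -> paper) and r2(paper -> year): hypothetical sources.
r1 = {("knuth", "taocp"), ("dijkstra", "ewd")}
r2 = {("taocp", 1968), ("ewd", 1975)}

def query_with_binding(rel, bound_value):
    """Simulate a form-based access: first column must be bound."""
    return {t for t in rel if t[0] == bound_value}

def reachable_answers(seed_values):
    known = set(seed_values)   # values usable as inputs so far
    tuples = set()             # tuples extracted so far
    changed = True
    while changed:             # fixpoint: stop when no new value appears
        changed = False
        for rel in (r1, r2):
            for v in list(known):
                for t in query_with_binding(rel, v):
                    tuples.add(t)
                    if t[1] not in known:
                        known.add(t[1])
                        changed = True
    return tuples

# Starting only from the constant "knuth", dijkstra's tuples stay unreachable:
print(sorted(reachable_answers({"knuth"}), key=str))
# [('knuth', 'taocp'), ('taocp', 1968)]
```

This illustrates why the answer under access limitations can be a strict subset of the traditional database answer: tuples whose input values are never obtained remain inaccessible.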
- Materials: pdf (~4.93 MB)
- WebPage: -
-
- Speaker: Beniamino Di Martino (Seconda Università di Napoli)
- Dates: Thursday 03/23/2010, h. 2.30 pm.
- Room: Conferences Room DIA
- Abstract: -
- Materials: -
- WebPage: -
-
- Speaker: Prof. Michael Kaufmann (University of Tübingen)
- Dates: Thursday 03/18/2010, h. 3.00 pm.
- Room: Conferences Room DIA
- Abstract: -
- Materials: -
- WebPage: -
-
- Speaker: Prof. Renee Miller (University of Toronto)
- Dates: Monday 11/09/2009, h. 2.30 pm
- Room: N1 DIA
- Abstract: There is a great deal of research on schema and ontology integration
that makes use of rich logical constraints to reason about the
structural and logical alignment of schemas and ontologies. There is
also considerable work on matching data instances from heterogeneous
schemas or ontologies. However, little work exploits the fact that
ontologies include both data and structure. We provide a first step
in closing this gap with a new algorithm (Iliads) that integrates both
data matching and logical reasoning to achieve matchings of ontologies
that agree better with the schema and data matchings found by human
reviewers. We compare against two systems: the ontology matching
tool FCA-merge and the schema matching tool COMA++. This is
preliminary work in this area, and the talk will highlight further
opportunities for integrating data matching (including entity
resolution) with schema matching and mapping. This is joint work with
Octavian Udrea and Lise Getoor of the University of Maryland, which
appeared in SIGMOD 2007.
- Materials: -
- WebPage: -
-
- Speaker: -
- Dates: Saturday 10/31/2009, h: 08:00 am
- Room: Rooms N10 - N11 DIA
- Abstract: -
- Materials: -
- WebPage: Program
-
- Speaker: Prof. Katsushi Ikeuchi (University of Tokyo)
- Dates: Thursday 10/08/2009, h: 03:00 pm
- Room: Conferences Room DIA
- Abstract: We have been developing the paradigm referred to as programming-by-demonstration. The method involves simple observation of what a human is doing and generation of robot programs to mimic the same operations. The first half of this talk presents the history of what we have done so far under this paradigm. Here, we emphasize the top-down approach to utilize pre-defined, mathematically derived task-and-skill models for observing and mimicking human operations. We will show several examples of task-and-skill models applicable in different domains. Then, the second half focuses on our newest effort to make a humanoid robot dance Japanese folk dances using the same paradigm. Human dance motions are recorded using optical or magnetic motion-capture systems. These captured motions are segmented into tasks using motion analysis, music information, and task-and-skill models. We can characterize personal differences in dance using task-and-skill models. Then, we can map these motion models onto robot motions by considering dynamic and structural differences between human and robot bodies. As a demonstration of our system, I will show a video in which a humanoid robot performs a Japanese folk dance.
- Materials: -
- WebPage: -
-
- Speaker: Prof. Seok-Hee Hong (University of Sydney)
- Dates: Thursday 10/01/2009, h: 04:00 pm
- Room: Conferences Room DIA
- Abstract: Graph Drawing has attracted much attention over the last twenty years
due to its wide range of applications, such as VLSI design, social networks, software engineering and bioinformatics.
A straight-line drawing is called a "convex drawing" if every facial cycle is drawn as a convex polygon.
Convex representation of graphs is a well-established aesthetic in Graph Drawing; however, not all planar graphs admit a convex drawing, as observed earlier by Steinitz, Tutte and Thomassen.
In this talk, we introduce two new notions of drawings, "inner-convex drawings" and "star-shaped drawings", as natural extensions of convex drawings.
We present various results, including characterisation, testing, embedding and drawing algorithms.
Our results extend the classical results by Tutte, and include results by Thomassen and Steinitz as special cases.
- Materials: -
- WebPage: -
-
- Speaker: Vagelis Hristidis (Florida International University)
- Dates: Tuesday 07/14/2009, h: 11:30 am
- Room: Conferences Room DIA
- Abstract: As the amount of available data increases, the problem of information discovery, often referred to as finding the needle in the haystack, becomes more pressing. The most successful search applications today are general-purpose Web search engines and well-structured database querying (e.g., SQL). Directly applying these two search models to specific domains is ineffective, since they ignore the domain
semantics, e.g., the meaning of object associations: a biologist wants to see different results from a physician for the same query on PubMed. We present challenges and techniques to achieve effective information discovery on vertical domains by modeling the domain semantics and its users, and by exploiting the knowledge of domain experts. Our focal domains are product marketplaces, biological data, clinical data, and bibliographic data. This project is funded by NSF.
- Materials: -
- WebPage: -
-
- Speaker: Chair: Federica Pascucci (DIA). Speakers: Alessandro Saffiotti (Örebro University, Sweden), Maurizio Di Rocco (DIA), Attilio Priolo (DIA), Andrea Gasparri (DIA), Paolo Stegagno (University of Rome "La Sapienza"), Fabrizio Flacco (University of Rome "La Sapienza"), Marilena Vendittelli (University of Rome "La Sapienza")
- Dates: Wednesday 05/06/2009, h: 9:30 am
- Room: Room N3, DIA
- WebPage: WebPage
-
- Speaker: José Hernández-Orallo (Universidad Politécnica de Valencia)
- Dates: Monday 05/18/2009, h: 11:30 am
- Room: Conferences Room DIA
- Abstract: Outline: Motivation. Data Mining Models. Model Calibration. Context (ROC) Analysis. Model Combination. Model Simplification. Mimetic Models: general idea; specific implementations for calibration, adaptation, combination, simplification and revision. Conclusions.
- Materials: Materials
- WebPage: WebPage
-
- Speaker:
- Roberto Contro (LaBS-Laboratory of Biological Structural Mechanics, Politecnico di Milano)
- Marcelo Epstein (Faculty of Mechanical and Manufacturing Engineering & Adjunct Professor, Faculty of Kinesiology and Humanities, University of Calgary)
- Antonio Di Carlo (LaMS-Modelling & Simulation Lab, University of Rome "Roma Tre")
- Dates: Thursday 02/19/2009, h: 10:00 am
- Room: Seminars Room (third floor), Dipartimento di Strutture - via Corrado Segre 6
- Abstract: -
- Materials: -
- WebPage: -
Last update: 2010/12/17