Seminars Offered at the Doctoral School of Engineering
-
- Speaker: Hans Kellerer
- Date: Monday, 20 December 2010, 14:30
- Room: DIA Meeting Room, 1st floor
- Abstract: We design a fully polynomial-time approximation scheme (FPTAS) for a single-machine scheduling problem: minimizing the total weighted
earliness and tardiness with respect to a common restrictive due date. Note that no constant-ratio approximation algorithm
has been known for this problem so far. Our approach is based on adapting an FPTAS for a special version of the knapsack problem in which a convex quadratic
non-separable function is minimized. For the continuous relaxation of such a knapsack problem we give an algorithm of quadratic time complexity. The running
time of each presented FPTAS is strongly polynomial.
For further information contact G. Nicosia (nicosia@dia.uniroma3.it, tel. 06 57333455).
- Materials: -
- Web page: -
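The objective in this abstract is simple to state concretely. Below is a minimal sketch (illustrative names, not the speaker's algorithm) of evaluating the total weighted earliness-tardiness of a fixed job sequence against a common due date:

```python
def total_weighted_et(proc_times, weights, due):
    """Total weighted earliness-tardiness of jobs run in the given
    order on a single machine, against a common due date `due`."""
    total, t = 0, 0
    for p, w in zip(proc_times, weights):
        t += p                      # completion time of this job
        total += w * abs(t - due)   # earliness or tardiness, weighted
    return total

# Example: three jobs with processing times 2, 3, 4 and common due date 6.
print(total_weighted_et([2, 3, 4], [1, 2, 1], 6))  # → 9
```

The FPTAS in the talk approximates the minimum of this objective over all schedules; evaluating one fixed schedule, as above, is the easy part.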
-
- Speaker: Fabrizio Frati
- Date: Thursday, 21 October 2010, 17:30
- Room: DIA Meeting Room, 1st floor
- Abstract: We prove that planar graphs have $O(\log^4 n)$ queue number, thus
improving upon the previous $O(\sqrt n)$ upper bound. Consequently,
planar graphs admit 3D straight-line crossing-free grid drawings in $O(n
\log^c n)$ volume, for some constant $c$, thus improving upon the
previous $O(n^{3/2})$ upper bound.
This is joint work with Giuseppe Di Battista (Roma Tre University) and
János Pach (EPFL Lausanne and Rényi Institute, Budapest).
The same presentation will be given at FOCS 2010, the 51st Annual IEEE
Symposium on Foundations of Computer Science, Las Vegas, October 23-26, 2010.
- Materials: -
- Web page: -
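A queue layout, the structure behind the queue-number bound, can be verified mechanically: fix a linear vertex order and partition the edges into queues; a queue is valid when no two of its edges nest. A small illustrative checker (not the paper's construction):

```python
def is_valid_queue_layout(order, queues):
    """Check a queue layout: `order` is a left-to-right vertex order,
    `queues` a list of edge sets. Within one queue, no two edges may
    strictly nest (a < c < d < b in the vertex order)."""
    pos = {v: i for i, v in enumerate(order)}
    for queue in queues:
        # Normalise each edge to (left position, right position).
        spans = sorted((min(pos[u], pos[v]), max(pos[u], pos[v]))
                       for u, v in queue)
        for i in range(len(spans)):
            for j in range(i + 1, len(spans)):
                (a, b), (c, d) = spans[i], spans[j]
                if a < c and d < b:   # edge j strictly inside edge i
                    return False
    return True

# A 4-cycle laid out in order 0,1,2,3 fits in two queues:
print(is_valid_queue_layout([0, 1, 2, 3],
                            [{(0, 1), (1, 2), (2, 3)}, {(0, 3)}]))  # → True
```

Putting the chord (0, 3) into the same queue as (1, 2) would create a nested pair, so that layout would be rejected.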
-
- Speaker: Massimo Rimondini (DIA)
- Date: Thursday, 16 September 2010, 12:00
- Room: DIA Meeting Room, 1st floor
- Abstract: Compliance with the Gao-Rexford conditions is perhaps the most
realistic explanation of Internet routing stability, although BGP is
notoriously prone to oscillations. Informally, the Gao-Rexford
conditions assume that (i) the
business relationships between Internet Service Providers (ISPs) yield a
hierarchy, (ii) each ISP behaves in a rational way, i.e., it does not
offer transit to other ISPs for free, and (iii) each ISP ranks routes
through customers better than routes through providers and peers.
We show an efficient algorithm that, given a BGP configuration, checks
whether there exists an assignment of peer-peer
and customer-provider relationships that complies with the Gao-Rexford
conditions. Also, we show that preferring routes through peers to those
through providers, although more suitable than the original formulation
of Condition (iii) to describe the business relationships between ISPs,
makes the problem NP-hard.
The above results hold both in the well known theoretical framework used
to model BGP and in a more realistic setting where (i)
local preferences are assigned on a per-neighbor basis and (ii) transit
is allowed from/to specific neighbor pairs. Observe that the latter
setting, where policy complexity only depends on the number of
neighbors, is very close to the way in which operators typically
configure routers.
This is joint work with Luca Cittadini (Roma Tre University), Giuseppe
Di Battista (Roma Tre University), Thomas Erlebach (University of
Leicester), and Maurizio Patrignani (Roma Tre University).
The same presentation will be given at the 18th IEEE International
Conference on Network Protocols (ICNP), Kyoto, Japan, October 5-8, 2010.
- Materials: -
- Web page: -
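The Gao-Rexford conditions are commonly formalized in terms of "valley-free" routing paths: once a route has traversed a provider-to-customer or peer link, it may only continue downwards. A minimal illustrative checker under that standard formalization (relationship labels and data are hypothetical, not the authors' code):

```python
# `rel` maps a directed AS pair (u, v) to u's view of the link:
# 'c2p' (u is v's customer), 'p2c' (u is v's provider), or 'peer'.
def is_valley_free(path, rel):
    """No-free-transit check in path form: after a provider-to-customer
    or peer step, only provider-to-customer steps may follow."""
    going_down = False
    for u, v in zip(path, path[1:]):
        kind = rel[(u, v)]
        if going_down and kind != 'p2c':
            return False            # a valley: up or sideways after down
        if kind in ('p2c', 'peer'):
            going_down = True
    return True

rel = {('A', 'B'): 'c2p', ('B', 'C'): 'peer', ('C', 'D'): 'p2c'}
print(is_valley_free(['A', 'B', 'C', 'D'], rel))  # → True
```

Note that this only checks a single path; the talk's algorithm solves the harder problem of deciding whether a relationship assignment compatible with the conditions exists at all.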
-
- Speaker: Denilson Barbosa (University of Alberta, Edmonton)
- Date: Monday, 5 July 2010, 15:00
- Room: N7 DIA
- Abstract: We consider the top-k Approximate Subtree Matching (TASM) problem:
finding the k best matches of a small query tree, e.g., a DBLP article
with 15 nodes, in a large document tree, e.g., DBLP with 26M nodes,
using the canonical tree edit distance as a similarity measure between
subtrees. Evaluating the tree edit distance for large XML trees is
difficult: the best known algorithms have cubic runtime and quadratic
space complexity, and, thus, do not scale. Our solution is
TASM-postorder, a memory-efficient and scalable TASM algorithm. We
prove an upper bound on the maximum subtree size for which the tree
edit distance needs to be evaluated. The upper bound depends on the
query and is independent of the document size and structure. A core
problem is to efficiently prune subtrees that are above this size
threshold. We develop an algorithm based on the prefix ring buffer that
allows us to prune all subtrees above the threshold in a single
postorder scan of the document. The size of the prefix ring buffer is
linear in the threshold. As a result, the space complexity of
TASM-postorder depends only on k and the query size, and the runtime of
TASM-postorder is linear in the size of the document. Our experimental
evaluation on large synthetic and real XML documents confirms our
analytic results.
- Materials: -
- Web page: -
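The pruning idea in the abstract (skip every subtree larger than the bound in a single postorder pass) can be sketched in a few lines. This toy version recurses over an in-memory tree instead of streaming with the prefix ring buffer, so it is only illustrative:

```python
def candidate_subtrees(tree, max_size):
    """One postorder pass that computes every subtree size bottom-up and
    keeps only the subtrees within `max_size`; larger ones can be pruned
    before any tree edit distance is evaluated.
    `tree` is a pair (label, list_of_children)."""
    candidates = []

    def visit(node):
        label, children = node
        size = 1 + sum(visit(c) for c in children)
        if size <= max_size:
            candidates.append((label, size))
        return size

    visit(tree)
    return candidates

# Toy DBLP-like document with 6 nodes, threshold 3: the root is pruned.
doc = ('dblp', [('article', [('title', []), ('year', [])]),
                ('article', [('title', [])])])
print(candidate_subtrees(doc, 3))
```

Only the subtrees that survive this filter need ever be compared with the query by tree edit distance, which is what keeps the memory footprint independent of the document size.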
-
- Speaker: Denilson Barbosa (University of Alberta, Edmonton)
- Date: Thursday, 1 July 2010, 15:00
- Room: N7 DIA
- Abstract: We advocate an automated approach for verifying mappings between
source and target databases in which semantics are taken into
account, and that avoids two serious limitations of current
verification approaches: reliance on availability of sample source
and target instances, and reliance on strong statistical assumptions.
We discuss how our approach can be integrated into the workflow
of state-of-the-art mapping design systems, along with all the
inputs it requires. Our approach relies on checking the entailment of
verification statements derived directly from the schema mappings
and from semantic annotations to the variables used in such
mappings. We discuss how such verification statements can be
produced and how such annotations can be extracted from different
kinds of alignments of schemas into domain ontologies.
Such alignments can be derived semi-automatically; thus, our
framework might prove useful in also greatly reducing the amount
of input from domain experts in the development of mappings.
- Materials: -
- Web page: -
-
- Speaker: Denilson Barbosa (University of Alberta, Edmonton)
- Date: Wednesday, 30 June 2010, 15:00
- Room: N3 DIA
- Abstract: Social network analysis aims at uncovering and understanding the
structures and patterns resulting from social interactions among
individuals and organizations engaged in a common activity. Since the
early days of the field, networks have been modeled as graphs representing
social actors and the relations between them. The field has become very active
with the maturity of computational machinery to handle large-scale
graphs, and, more recently, the automated gathering of social data. This
talk will describe ReaSoN: an ongoing work towards building a system for
extracting, visualizing and exploring social networks. The talk will
focus on the underlying infrastructure behind ReaSoN, the extraction of
social networks from structured citation databases as well as
unstructured social media, some data management issues that arise in
building such systems, notions of network visibility and the processing
of user-defined queries.
As an illustration, the talk covers the first incarnation of ReaSoN: a
social network resulting from academic research built from ACM DL, DBLP
and Google Scholar data. In doing so, ReaSoN contributes to the
understanding as well as fostering of the social networks underlying
research.
- Materials: -
- Web page: -
-
- Speaker: Andrea Calì (Brunel University), Davide Martinenghi (Politecnico di Milano)
- Date: Thursday, 10 June 2010, 11:30
- Room: N7 DIA
- Abstract: The term Deep Web refers to the data content that is created dynamically as
the result of a specific search on the Web. In this respect, such content
resides outside web pages, and is only accessible through interaction with the
web site – typically via HTML forms. It is believed that the size of the Deep
Web is several orders of magnitude larger than that of the so-called Surface
Web, i.e., the web that is accessible and indexable by search engines.
Usually, data sources accessible through web forms are modeled by relations
that require certain fields to be selected – i.e., some fields in the form
need to be filled in. These requirements are commonly referred to as access
limitations in that access to data can only take place according to given
patterns. Besides data accessible through web forms, access limitations may
also occur i) in legacy systems where data scattered over several files are
wrapped as relational tables, and ii) in the context of Web services, where
similar restrictions arise from the distinction between input parameters and
output parameters. In such contexts, computing the answer to a user query
cannot be done as in a traditional database; instead, a query plan is needed
that provides the best answer possible while complying with the access
limitations. In these talks, we illustrate the semantics of answers to queries
over data sources under access limitations and present techniques for query
answering in this context. We show different techniques to optimize query
answering both at the time of the query plan generation and at the time of the
execution of the query plan. We analyze the influence of integrity constraints
on the sources, of the kind that is usually found in database schemata, on
query answering. We present prototype systems that are aimed at querying the
deep web, and show their achievements.
- Materials: pdf (~4.93 MB)
- Web page: -
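Query answering under access limitations amounts to a fixpoint computation: a source can be called only once its required input field is bound to a known constant, and each answer may supply new constants that unlock further calls. A minimal illustrative sketch, assuming one input attribute per relation (names and data are hypothetical):

```python
def reachable_tuples(relations, seeds):
    """Fixpoint extraction under access limitations.
    `relations` maps a name to (input_pos, tuples): the source can only
    be queried by binding attribute `input_pos` to a known constant.
    `seeds` are the constants known from the user query."""
    known = set(seeds)
    answers = set()
    changed = True
    while changed:
        changed = False
        for name, (in_pos, tuples) in relations.items():
            for t in tuples:
                if t[in_pos] in known and (name, t) not in answers:
                    answers.add((name, t))   # accessible: input is bound
                    known.update(t)          # outputs become new constants
                    changed = True
    return answers

# Starting from constant 'a', ('b','c') becomes reachable via 'b';
# ('x','y') never does, because 'x' is never obtained.
relations = {'guides': (0, [('a', 'b'), ('b', 'c'), ('x', 'y')])}
print(reachable_tuples(relations, seeds={'a'}))
```

This mirrors the talks' point that under access limitations one computes the best obtainable answer by chaining accesses, rather than evaluating the query as in a traditional database.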
-
- Speaker: Beniamino Di Martino (Seconda Università di Napoli)
- Date: Tuesday, 23 March 2010, 14:30
- Room: DIA Meeting Room
- Abstract: -
- Materials: -
- Web page: -
-
- Speaker: Prof. Michael Kaufmann (University of Tübingen)
- Date: Thursday, 18 March 2010, 15:00
- Room: DIA Meeting Room
- Abstract: -
- Materials: -
- Web page: -
-
- Speaker: Prof. Renée Miller (University of Toronto)
- Date: Monday, 9 November 2009, 14:30
- Room: N1, ground floor, DIA
- Abstract: There is a great deal of research on schema and ontology integration
which makes use of rich logical constraints to reason about the
structural and logical alignment of schemas and ontologies. There is
also considerable work on matching data instances from heterogeneous
schemas or ontologies. However, little work exploits the fact that
ontologies include both data and structure. We provide a first step
in closing this gap with a new algorithm (Iliads) that integrates both
data matching and logical reasoning to achieve better matching of
ontologies. We evaluate our algorithm on a set of pairs of OWL Lite
ontologies with the schema and data matchings found by human
reviewers. We compare against two systems: the ontology matching
tool FCA-Merge and the schema matching tool COMA++. This is
preliminary work in this area, and the talk will highlight further
opportunities for integrating data matching (including entity
resolution) with schema matching and mapping. This is joint work with
Octavian Udrea and Lise Getoor of the University of Maryland, which
appeared in SIGMOD 2007.
- Materials: -
- Web page: -
-
- Speaker: -
- Date: Saturday, 31 October 2009, 8:30
- Room: Rooms N10 - N11, DIA
- Abstract: -
- Materials: -
- Web page: Program
-
- Speaker: Prof. Katsushi Ikeuchi (University of Tokyo)
- Date: Thursday, 8 October 2009, 15:00
- Room: DIA Meeting Room
- Abstract: We have been developing the paradigm referred to as programming-by-demonstration. The method involves simple observation of what a human is doing and generation of robot programs to mimic the same operations. The first half of this talk presents the history of what we have done so far under this paradigm. Here, we emphasize the top-down approach of utilizing pre-defined, mathematically derived task-and-skill models for observing and mimicking human operations. We will show several examples of task-and-skill models applicable in different domains. The second half then focuses on our newest effort to make a humanoid robot dance Japanese folk dances using the same paradigm. Human dance motions are recorded using optical or magnetic motion-capture systems. These captured motions are segmented into tasks using motion analysis, music information, and task-and-skill models. We can characterize personal differences in dance using task-and-skill models. We can then map these motion models onto robot motions by considering dynamic and structural differences between human and robot bodies. As a demonstration of our system, I will show a video in which a humanoid robot performs a Japanese folk dance.
- Materials: -
- Web page: -
-
- Speaker: Prof. Seok-Hee Hong (University of Sydney)
- Date: Thursday, 1 October 2009, 16:00
- Room: DIA Meeting Room
- Abstract: Graph Drawing has attracted much attention over the last twenty years
due to its wide range of applications, such as VLSI design, social networks, software engineering and bioinformatics.
A straight-line drawing is called a "convex drawing" if every facial cycle is drawn as a convex polygon.
Convex representation of graphs is a well-established aesthetic in Graph Drawing; however, not all planar graphs admit a convex drawing, as observed earlier by Steinitz, Tutte and Thomassen.
In this talk, we introduce two new notions of drawings, "inner-convex drawings" and "star-shaped drawings", as natural extensions of convex drawings.
We present various results, including characterisation, testing, embedding and drawing algorithms.
Our results extend the classical results by Tutte, and include results by Thomassen and Steinitz as special cases.
- Materials: -
- Web page: -
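The classical result by Tutte mentioned above is constructive: pin the outer face to a convex polygon and place every interior vertex at the barycentre of its neighbours, and for a 3-connected planar graph the result is a convex, crossing-free drawing. A small sketch using plain iteration rather than a direct linear solve (illustrative only):

```python
import math

def tutte_drawing(adj, outer, iters=500):
    """Tutte's barycentric method, approximated by fixed-point
    iteration: `adj` is an adjacency dict, `outer` the outer face,
    pinned onto a regular convex polygon."""
    pos = {}
    for i, v in enumerate(outer):                 # fix the outer polygon
        a = 2 * math.pi * i / len(outer)
        pos[v] = (math.cos(a), math.sin(a))
    inner = [v for v in adj if v not in pos]
    for v in inner:
        pos[v] = (0.0, 0.0)
    for _ in range(iters):                        # relax interior vertices
        for v in inner:
            xs = [pos[u][0] for u in adj[v]]
            ys = [pos[u][1] for u in adj[v]]
            pos[v] = (sum(xs) / len(xs), sum(ys) / len(ys))
    return pos

# Wheel on 4 vertices: hub 0 joined to the triangle 1, 2, 3.
adj = {0: [1, 2, 3], 1: [0, 2, 3], 2: [0, 1, 3], 3: [0, 1, 2]}
print(tutte_drawing(adj, outer=[1, 2, 3])[0])  # hub ends up near (0, 0)
```

Every interior face of the fixpoint drawing is convex, which is exactly the property the talk's "inner-convex" and "star-shaped" drawings relax.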
-
- Speaker: Vagelis Hristidis (Florida International University)
- Date: Tuesday, 14 March 2009, 11:30
- Room: DIA Meeting Room
- Abstract: As the amount of available data increases, the problem of information discovery, often referred to as the needle-in-the-haystack problem, becomes more pressing. The most successful search applications today are general-purpose Web search engines and well-structured database querying (e.g., SQL). Directly applying these two search models to specific domains is ineffective, since they ignore the domain semantics (e.g., the meaning of object associations): for instance, a biologist wants to see different results than a physician for the same query on PubMed. We present challenges and techniques to achieve effective information discovery on vertical domains by modeling the domain semantics and its users, and by exploiting the knowledge of domain experts. Our focal domains are product marketplaces, biological data, clinical data, and bibliographic data. This project is funded by NSF.
- Materials: -
- Web page: -
-
- Speaker: Chair: Federica Pascucci (DIA). Speakers: Alessandro Saffiotti (Örebro University, Sweden), Maurizio Di Rocco (DIA), Attilio Priolo (DIA), Andrea Gasparri (DIA), Paolo Stegagno (La Sapienza), Fabrizio Flacco (La Sapienza), Marilena Vendittelli (La Sapienza)
- Date: 6 May 2009, 9:30
- Room: Room N3, DIA
- Web page: Web page
-
- Speaker: José Hernández-Orallo (Universidad Politécnica de Valencia)
- Date: Monday, 18 May 2009, 11:30
- Room: DIA Meeting Room
- Abstract: Outline: Motivation. Data Mining Models. Model Calibration. Context (ROC) Analysis. Model Combination. Model Simplification, Mimetic Models. General idea. Specific implementations for calibration, adaptation, combination, simplification and revision. Conclusions.
- Materials: Materials
- Web page: Web page
-
- Speaker:
- Roberto Contro (LaBS - Laboratory of Biological Structural Mechanics, Politecnico di Torino)
- Marcelo Epstein (Faculty of Mechanical and Manufacturing Engineering & Adjunct Professor, Faculty of Kinesiology and Humanities, University of Calgary)
- Antonio Di Carlo (LaMS - Modelling & Simulation Lab, Università Roma Tre)
- Date: Thursday, 19 February, 10:00
- Room: Seminar Room (3rd floor), Department of Structures, via Corrado Segre 6
- Abstract: -
- Materials: -
- Web page: -
Last updated: 17/12/2010