Keynote Speakers


    - Professor Georg Gottlob

    - University of Oxford, United Kingdom

Biography

Dr. Georg Gottlob holds a professorship at the University of Oxford. From 2006 to 2011 he was head of the Department of Computer Science at the University of Oxford, and he is currently head of informatics at Oxford.

Professor Georg Gottlob's most important research areas include web data extraction, databases, graphs, and complexity theory.

He is currently a member of the editorial boards of several well-known journals, including:

- Journal of Computer and System Sciences

- Artificial Intelligence

- Web Intelligence and Agent Systems (WIAS)

- Journal of Applied Logic

- Journal of Discrete Algorithms

Abstract of the Talk by Professor Georg Gottlob

Extracting Big Data from the Deep Web: Technology, Research, and Business

Do you need to rent a new apartment fulfilling certain requirements? Or would you just like to find a restaurant in your area that serves pasta al pesto as today’s special? In either case, you would most likely start a web search, but keyword search as provided by current search engines is not really appropriate. The relevant data (at least for apartments) will reside in the Deep Web and requires forms to be automatically filled. Moreover, a keyword Web search does not allow you to pose complex queries. Solving this problem, at least for certain verticals such as real estate, used cars, or restaurants, requires the extraction of massive data from heterogeneously structured websites of the Deep Web, and the storage of the data into a database having a uniform schema.

In this talk I will report on my 15-year-long venture into Web data extraction. In particular, I will discuss the Lixto project we carried out at TU Wien, and the DIADEM ERC project we recently accomplished at Oxford. I will survey the tools and systems we constructed, the applications we carried out, and also some of the research results we achieved on the logical and theoretical foundations of Web data extraction. In addition, I will report on two start-ups we spun out.
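As a rough, hedged illustration of the problem described above, the sketch below maps records scraped from two differently structured (hypothetical) real-estate sites into one uniform schema and then runs a structured query that keyword search cannot express; it is not the actual Lixto or DIADEM pipeline, and all field names are invented.

    # Minimal sketch (hypothetical, not the Lixto/DIADEM pipeline): records extracted
    # from two differently structured real-estate sites are mapped into one uniform
    # schema so they can be stored in a single database and queried with structured
    # conditions that plain keyword search cannot express.

    def normalize_site_a(record):
        # Site A (hypothetical) exposes "locality", "rooms" and a rent string
        # such as "1200 EUR/month".
        return {
            "source": "site_a",
            "city": record["locality"],
            "bedrooms": int(record["rooms"]),
            "monthly_rent_eur": int(record["rent"].split()[0]),
        }

    def normalize_site_b(record):
        # Site B (hypothetical) uses different field names and numeric rents.
        return {
            "source": "site_b",
            "city": record["town"],
            "bedrooms": int(record["bedroom_count"]),
            "monthly_rent_eur": int(record["price_per_month"]),
        }

    def query(listings, city, max_rent):
        # A structured query over the uniform schema.
        return [l for l in listings
                if l["city"] == city and l["monthly_rent_eur"] <= max_rent]

    raw_a = [{"locality": "Oxford", "rooms": "2", "rent": "1200 EUR/month"}]
    raw_b = [{"town": "Oxford", "bedroom_count": 3, "price_per_month": 1800}]
    listings = [normalize_site_a(r) for r in raw_a] + [normalize_site_b(r) for r in raw_b]
    print(query(listings, city="Oxford", max_rent=1500))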

    - Professor Gabriella Pasi

    - Full Professor, Department of Informatics, Systems and Communication, University of Milano-Bicocca, Italy

Biography

Professor Gabriella Pasi is a full professor in the Department of Informatics, Systems and Communication at the University of Milano-Bicocca, Italy. Her particular expertise is in information retrieval and information filtering. She has also carried out several projects and research studies on user-generated content in social networks.

Professor Pasi has published more than 200 papers in reputable journals and conference proceedings, and has served as chair and keynote speaker of several major European conferences. Since 2013 she has been president of the European Society for Fuzzy Logic and Technology (EUSFLAT). In addition to serving on the editorial boards of several well-known journals, she is co-editor-in-chief of the two journals below:

- International Journal of Computational Intelligence Systems (IJCIS), Atlantis Press (since 2007).

- Journal of Intelligent and Fuzzy Systems (JIFS), IOS Press (since 2013).

For more information, please visit Professor Gabriella Pasi's homepage.


Abstract of the Talk by Professor Gabriella Pasi

The Issue of Information Credibility on the Social Web

In the scenario of the Social Web, where a large amount of User Generated Content is diffused through Social Media, often without any form of trusted external control, the risk of running into misinformation is not negligible. For this reason, assessing the credibility of both information objects and sources of information constitutes a fundamental issue for users. Credibility, also referred to as believability, is a quality perceived by individuals, who are not always able to discern, with their own cognitive capabilities, genuine information from fake information.

For this reason, in recent years several approaches have been proposed to automatically assess credibility in Social Media. Most of them are based on data-driven approaches, i.e., they employ machine learning techniques to identify misinformation, but recently model-driven approaches have also been emerging. Data-driven approaches have proven effective in detecting false information, but it is difficult to measure the contribution that each involved feature makes to the credibility assessment. Furthermore, especially for supervised machine learning approaches, it is difficult to obtain real-life datasets labeled with respect to credibility, in particular when dealing with opinion spam.

Model-driven approaches aim at defining a predictive model based on an analysis of the problem and of the identified objects and their features. In particular, approaches relying on a Multi Criteria Decision Making paradigm compute an overall credibility assessment for a given information object (posts and blogs) by separately evaluating each feature connected to each alternative, and by subsequently aggregating the single assessments into an overall one. Several classes of aggregation operators can be employed to obtain the overall credibility estimate, thus modeling distinct behaviors of the considered process, corresponding to distinct predictive models. Furthermore, some aggregation operators, such as Choquet integrals and copulas, allow modeling the interaction between criteria.

In this lecture the impact of aggregation will be shown in the context of assessing the credibility of user generated content. In particular, it will be shown that quantifier-guided aggregation offers an interesting alternative to the application of machine learning techniques (in particular classifiers).
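As a hedged illustration of the quantifier-guided aggregation mentioned above, the sketch below combines per-feature credibility scores with an OWA operator whose weights are derived from a "most"-like linguistic quantifier (Yager-style); the feature names and scores are hypothetical, and this is not the exact model presented in the lecture.

    # Minimal sketch of quantifier-guided OWA aggregation, assuming per-feature
    # credibility scores in [0, 1]; feature names and values are hypothetical.

    def quantifier(r, alpha=2.0):
        # Regular increasing monotone (RIM) quantifier Q(r) = r**alpha;
        # alpha > 1 roughly models the linguistic quantifier "most".
        return r ** alpha

    def owa_weights(n, alpha=2.0):
        # Quantifier-guided weights: w_i = Q(i/n) - Q((i-1)/n), i = 1..n.
        return [quantifier(i / n, alpha) - quantifier((i - 1) / n, alpha)
                for i in range(1, n + 1)]

    def owa(scores, alpha=2.0):
        # Order the scores decreasingly and take the weighted sum.
        ordered = sorted(scores, reverse=True)
        weights = owa_weights(len(scores), alpha)
        return sum(w * s for w, s in zip(weights, ordered))

    # Hypothetical per-feature credibility assessments for one post.
    features = {"author_reputation": 0.9, "text_quality": 0.7,
                "external_links": 0.4, "community_feedback": 0.8}
    print("overall credibility:", round(owa(list(features.values()), alpha=2.0), 2))

Other aggregation operators mentioned in the abstract, such as Choquet integrals or copulas, could replace the OWA step when interactions between criteria need to be modeled.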

    - Professor Frank van Harmelen

    - Professor, Department of Computer Science, Vrije Universiteit Amsterdam, the Netherlands

Biography

Professor van Harmelen is a professor in the Department of Computer Science at Vrije Universiteit Amsterdam, the Netherlands. He is recognized in Europe and worldwide as a leading researcher, particularly in the area of the Semantic Web; he carried out the first EU-level Semantic Web project and has since taken part in several large and well-known projects. He is one of the founders of the well-known web ontology language OWL. Professor van Harmelen is a co-author of the book A Semantic Web Primer, which is regarded as one of the best-known books in the Semantic Web field. For further information and details, please visit Professor van Harmelen's homepage.


Abstract of the Talk by Professor Frank van Harmelen

Very (VERY) Large-Scale Knowledge Representation

In the past 15 years, the field of knowledge representation has seen a major breakthrough, resulting in distributed knowledge-bases containing billions of formal statements about hundreds of millions of objects, ranging from medicine to science, from politics to entertainment, and from specialist to encyclopaedic.

As a consequence of this enormous increase in size, researchers in knowledge representation have been forced to reconsider many of the assumptions that were (often silently) made in traditional knowledge representation research: we can no longer assume that our knowledge-bases are consistent, we can no longer assume that they use a single homogeneous vocabulary, we can no longer assume they are static (or even that they evolve only slowly), we can no longer assume we can simply do logical deductions on the entire knowledge base, etc. How to define notions of local consistency? How to interpret conclusions if the axioms change even before the reasoning engine finishes? Can we exploit the network structure of knowledge-bases to finally define a useful notion of "context"?

In this talk we will discuss the challenges that are raised for modern research in knowledge representation now that KR finally has to face the real world and can no longer rely on many of its previous comforting assumptions.