Abstracts of talks

Invited speakers
50 min. talks

New findings in the theory of Clarke Jacobians
Marián Fabian |
Let \( \mathcal P\) be any convex compact set consisting of \( m\times n\) matrices. We present a 'fractal' construction of a Lipschitzian mapping from \( R^n\) to \( R^m\) whose Clarke Jacobian at the origin is equal to \( \mathcal P\).

Coauthor(s): David Bartl

Computational aspects of optimal experimental designs |
Lenka Filova |
In the talk, we will review various classes of algorithms proposed to compute efficient experimental designs. We also introduce our R package OptimalDesign, which provides a toolbox for the computation of D-, A-, I-, and c-efficient exact and approximate designs of experiments on finite domains, for regression models with real-valued, uncorrelated observations, with multiple linear constraints on the vector of design weights. We will demonstrate how to apply the procedures to several practical optimal design situations. In addition, we will show that some well-known problems from computational geometry, matrix theory and graph theory are special cases of the general problem of optimal experimental design. We will discuss to what extent these problems can be solved via existing algorithms used for computing optimum experimental designs.

Coauthor(s): Radoslav Harman

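As a rough illustration of one classical member of this family of algorithms, here is a sketch of the multiplicative algorithm for D-optimal approximate designs on a finite design space. It is not necessarily one of the procedures implemented in OptimalDesign, and the quadratic regression model below is a hypothetical example.

```python
import numpy as np

def d_optimal_weights(F, iters=2000):
    """Multiplicative algorithm for a D-optimal approximate design.

    F is the m x p matrix of regression vectors f(x_i) over a finite
    design space of m candidate points; returns design weights w.
    """
    m, p = F.shape
    w = np.full(m, 1.0 / m)                  # start from the uniform design
    for _ in range(iters):
        M = F.T @ (w[:, None] * F)           # information matrix M(w)
        d = np.einsum('ij,jk,ik->i', F, np.linalg.inv(M), F)  # variance function
        w *= d / p                           # fixed points satisfy the equivalence theorem
    return w

# hypothetical example: quadratic regression f(x) = (1, x, x^2) on a grid in [-1, 1]
x = np.linspace(-1.0, 1.0, 21)
F = np.column_stack([np.ones_like(x), x, x**2])
w = d_optimal_weights(F)
```

For this model the D-optimal design is known to put weight about 1/3 on each of the points -1, 0 and 1, which the iteration recovers.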
Robustness aspects for the statistical analysis related to industrial applications |
Peter Filzmoser |
The term 'big data' has become a buzzword in recent years; it refers to the possibility of collecting and storing huge amounts of information, resulting in big databases and data repositories. This also holds for industrial applications: in a production process, for instance, it is possible to install many sensors and record data at a very high temporal resolution. The amount of information grows rapidly, but the insight into the production process does not necessarily grow with it. This is the point where machine learning or, say, statistics needs to enter, because sophisticated algorithms are required to identify, for example, the relevant parameters that drive the quality of the product. However, is data quality still an issue? It is clear that with small amounts of data, single outliers or extreme values can affect the algorithms or statistical methods. Can 'big data' overcome this problem? Or are more sophisticated machine learning algorithms the solution? In the presentation we will focus on some specific problems in the regression context, and show that even if many parameters are measured, poor data quality can severely influence the prediction performance of the methods.
Set-valued \( T\) -translative functions and their applications in finance |
Andreas Hamel |
Let \( X\) and \( Z\) be non-trivial, real linear spaces. The power set of \( Z\) is denoted by \( \mathcal P(Z)\) (including \( \emptyset\) ). Moreover, let \( T \colon Z \to X\) be an injective linear operator. A function \( f \colon X \to \mathcal P(Z)\) (often called a multi-valued or set-valued map, or a correspondence) is called \( T\) -translative if \[ \forall x \in X, \; \forall z \in Z \colon f(x + Tz) = f(x) + \{z\}. \] In this talk, the theory of \( T\) -translative functions is presented, covering topics such as closedness and semicontinuity, convexity, representations via set-valued infimal convolutions, as well as dual representations. Moreover, it is shown that a wide range of known as well as new applications is covered, such as projections in Hilbert spaces, aggregation mappings as generalizations of well-known scalarization procedures in vector optimization, quantile functions and risk measures for multivariate positions, lower and upper expectations (also in the multivariate case), and sets of super/sub-hedging portfolios in market models with transaction costs. The theory relies on the complete lattice approach to set optimization, i.e., the image spaces of \( T\) -translative functions will be complete lattices of sets in which infima and suprema make sense and have meaningful interpretations, e.g., in financial applications (in contrast to infima/suprema in multi-criteria and vector optimization).
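As a toy illustration of the definition (my own example, not from the talk): take \( X = Z = R\) and \( Tz = -z\) , and let \( f(x)\) be the half-line of cash amounts whose addition makes the position \( x\) acceptable (non-negative). Encoding each half-line by its lower endpoint, \( T\) -translativity can be checked numerically.

```python
def f_lower(x):
    # f(x) = [-x, inf): the cash amounts z whose addition makes the
    # position acceptable, i.e. x + z >= 0.  Half-lines are encoded
    # by their lower endpoints.
    return -x

# with Tz = -z, T-translativity f(x + Tz) = f(x) + {z} holds:
for x in [0.0, 1.5, -2.0]:
    for z in [0.0, 0.7, -3.1]:
        assert abs(f_lower(x + (-z)) - (f_lower(x) + z)) < 1e-12
```

This is exactly the cash-additivity of a risk measure, rewritten in the set-valued language of the talk.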
Impact of Digitalization on Productivity: Non-parametric Approach |
Mikuláš Luptáčik |
Digitalization and Industry 4.0 (the Internet of Things) are the focus of wide and intensive public debate as well as of growing theoretical and empirical research. The digital economy and Industry 4.0 offer both new opportunities and new challenges: new technologies can generate increased productivity and profitability, greater convenience of market transactions, and the creation of new and weightless goods and services produced at near-zero marginal cost. Fundamental implications can be expected for the labour market. Crucial for assessing the impact of digitalization on the labour market is the relationship between digitalization and productivity change: in the words of Tichy ((2016), p. 864), the question is to what extent 'the future leap in digitalization that some anticipate will trigger a leap in productivity that destroys jobs on a large scale'. In order to analyse the impact of digitalization on productivity, we first try to answer the following question: how can we measure the level of digitalization and its changes over time? We provide a new composite indicator based on Data Envelopment Analysis (DEA), overcoming the undesirable dependence of the final result on the preliminary normalization of sub-indicators (used in the construction of the well-known and widely used DESI indicator) and the subjective nature of the weights used for aggregation. We then try to answer the following questions: How does the adoption of industrial robots (the most advanced area of automation and one of the key elements of the present digital transformation) contribute to productivity growth? What is the contribution of increasing automation to the substitution of machines for labour? How does the adoption of elements of the digital economy going beyond automation contribute to productivity growth?
Using the decomposition based on Henderson and Russell for the period 1995–2015 and a sample of 26 countries, a positive contribution of the change in robots to GDP growth can be estimated for almost all countries (on average about 15%). Based on data envelopment analysis (DEA) at the country level, with labour, capital and the number of robots as inputs and GDP as the output, the impact of automation on labour use is investigated. Adopting and modifying the methodology of Färe et al. (2018), the change in the labour input and in the number of robots is decomposed into factors attributable to changes in technical efficiency, technology change, input mix, as well as output. In this way, the impact of automation on labour input (the substitution of labour by capital) can be estimated. Using the indicators of the five areas of digitalization (data from the Report of EC 2020) as inputs (besides capital and labour), deeper insights into the contributions of the particular areas of digital performance can be provided. The results show that a significant impact is generated by area 5, digital public services, on average by 2% (in several countries more than 5%), and by area 4, integration of digital technologies by business, on average more than 1% (in Finland more than 6%), which implies a higher contribution than the change in labour input.
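A minimal benefit-of-the-doubt sketch of a DEA-based composite indicator: each unit chooses its own most favourable weights, which removes the subjectively fixed weights used in indicators such as DESI. The data and the exact model here are hypothetical, simplified relative to the indicator proposed in the talk.

```python
import numpy as np
from scipy.optimize import linprog

def bod_scores(Y):
    """Benefit-of-the-doubt composite indicator (a DEA-type model).

    Y: units x indicators matrix of positive sub-indicators.  Each unit
    picks its own most favourable non-negative weights, subject to the
    normalization that no unit scores above 1 under those weights.
    """
    n, m = Y.shape
    scores = []
    for i in range(n):
        res = linprog(-Y[i],                    # maximize w . y_i
                      A_ub=Y, b_ub=np.ones(n),  # w . y_j <= 1 for every unit j
                      bounds=[(0, None)] * m)
        scores.append(-res.fun)
    return np.array(scores)

# hypothetical sub-indicator data for three countries
Y = np.array([[1.0, 0.5],
              [0.5, 1.0],
              [0.5, 0.5]])
scores = bod_scores(Y)
```

The first two units each reach a score of 1 under their own best weights; the third, dominated in both indicators, cannot.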
On Using Triples to Assess Symmetry Under Weak Dependence |
Marián Vávra |
The problem of assessing symmetry about an unspecified center of the one-dimensional marginal distribution of strictly stationary random processes is considered. A well-known U-statistic based on data triples is used to detect deviations from symmetry, allowing the underlying process to satisfy suitable mixing or near-epoch dependence conditions. We suggest using subsampling for inference on the target parameter, establish the asymptotic validity of the method in our setting, and discuss data-driven rules for selecting the size of subsamples. The small-sample properties of the proposed inference procedures are examined by means of Monte Carlo simulations and an application to time series of real output growth is also presented.

Coauthor(s): Zacharias Psaradakis

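For concreteness, the triples kernel underlying this type of U-statistic (in the spirit of the classical triples test of symmetry; the exact statistic of the talk may differ in details) can be computed by brute force:

```python
import itertools
import numpy as np

def triples_statistic(x):
    """Average over all data triples of the symmetry kernel
    (sgn(a+b-2c) + sgn(a+c-2b) + sgn(b+c-2a)) / 3,
    whose expectation is 0 under a symmetric marginal distribution."""
    sgn = np.sign
    vals = [(sgn(a + b - 2*c) + sgn(a + c - 2*b) + sgn(b + c - 2*a)) / 3.0
            for a, b, c in itertools.combinations(x, 3)]
    return float(np.mean(vals))
```

A perfectly symmetric sample yields 0, while a right-skewed sample yields a positive value; inference then proceeds by subsampling as in the abstract.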
Participants
25 min. talks
On the non-emptiness of the core of a cooperative game: a generalization of the Bondareva-Shapley Theorem |
David Bartl |
Consider a transferable utility cooperative game \( v\colon {\cal P}(N) \to {\bf R}\) , where \( N = \{1, \ldots, n\}\) is the set of the players and the power set \( {\cal P}(N) = \{\, K : K \subseteq N \,\}\) is the collection of the coalitions that can potentially be formed. The value \( v(K)\) is the total profit of the coalition \( K\) once it is formed. A coalition structure \( {\cal S}\) is a partition of the player set \( N\) ; that is, a collection \( {\cal S} \subseteq {\cal P}(N)\) of coalitions such that \( \emptyset \notin {\cal S}\) and \( K \cap L = \emptyset\) whenever \( K, L \in {\cal S}\) are distinct, and also \( \bigcup_{K\in{\cal S}} K = N\) . The payoff vector \( a \in {\bf R}^n\) describes the distribution of the coalitions' profits among the players. The core of the game \( v\) with respect to the coalition structure \( {\cal S}\) is the set \( {\cal C} = \bigl\{\, a \in {\bf R}^n : \sum_{i\in K} a_i \geq v(K)\) for every \( K \in {\cal P}(N) \setminus {\cal S}\) and \( \sum_{i\in K} a_i = v(K)\) for every \( K \in {\cal S} \,\bigr\}\) . Given the coalition structure consisting of the grand coalition only, that is \( {\cal S} = \{N\}\) , the Bondareva-Shapley Theorem states that the core \( {\cal C}\) of the game is non-empty if and only if the game is balanced. We present the Bondareva-Shapley Theorem for a general coalition structure \( {\cal S}\) .
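For the classical case \( {\cal S} = \{N\}\) , non-emptiness of the core can be checked by a standard linear program (a textbook reformulation, sketched here for illustration; it is not the generalization presented in the talk):

```python
import itertools
import numpy as np
from scipy.optimize import linprog

def core_nonempty(v, n):
    """Check core non-emptiness for S = {N}: the core is non-empty iff
    min { sum_i a_i : sum_{i in K} a_i >= v(K) for all K } equals v(N)."""
    coalitions = [K for r in range(1, n + 1)
                  for K in itertools.combinations(range(n), r)]
    A_ub = -np.array([[1.0 if i in K else 0.0 for i in range(n)]
                      for K in coalitions])
    b_ub = -np.array([v(set(K)) for K in coalitions])
    res = linprog(np.ones(n), A_ub=A_ub, b_ub=b_ub, bounds=[(None, None)] * n)
    return res.fun <= v(set(range(n))) + 1e-9

# three-player majority game: v(K) = 1 if |K| >= 2, else 0 -- its core is empty
majority = lambda K: 1.0 if len(K) >= 2 else 0.0
# additive game: v(K) = |K| -- its core is non-empty
additive = lambda K: float(len(K))
```

For the majority game the LP optimum is 1.5 > v(N) = 1, which is exactly the failure of balancedness.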
A Semismooth* Newton Method for Inclusions and Tame Optimization |
Matúš Benko |
Many important problems of mathematical analysis require us to shift our focus from standard equations to the broader area of generalized equations (inclusions) \( 0\in F(x)\) , where \( F\) is a set-valued mapping. In particular, when minimizing a smooth function \( f\) , the basic optimality condition at \( \bar x\) states that \( \nabla f(\bar x)=0\) . If \( f\) is nonsmooth, or if it is minimized over a subset \( \Gamma\) of the whole space, however, we end up with the inclusions \( 0\in\partial f(\bar x)\) or \( 0\in\nabla f(\bar x)+N_\Gamma(\bar x)\) , respectively. Here \( \partial f\) is a suitable subdifferential of \( f\) and \( N_\Gamma\) denotes a suitable normal cone mapping. Various Newton-type methods for solving inclusions \( 0 \in F(x)\) have been proposed, in which ''linearization'' concerns only a single-valued part of \( F\) (\( F\) is often a sum of a single-valued and a set-valued part). In order to obtain superlinear convergence, the notion of semismoothness of functions turned out to be essential. Recently, a new concept of semismoothness* for set-valued mappings was introduced and used to design a Newton method in which also the set-valued part is ''linearized''. In this talk, we further explore semismoothness* and provide its equivalent characterization in terms of primal generalized derivatives, which can perhaps be a bit more intuitive than the original definition via dual objects. More importantly, we connect semismoothness* with the interesting area of tame optimization, in which semialgebraic structures ensure a surprisingly nice behavior of such optimization problems. In particular, we provide a straightforward extension of the standard semismoothness of tame functions to semismoothness* of tame set-valued mappings. We further show that the tame setting is very suitable also with respect to the other assumptions of the semismooth* Newton method.
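As background, the classical (non-starred) semismooth Newton idea can be sketched on a scalar complementarity problem rewritten as a nonsmooth equation. This toy example is mine and does not reproduce the semismooth* method of the talk.

```python
def semismooth_newton(g, dg, x0, tol=1e-12, maxit=50):
    """Solve the scalar NCP  0 <= x,  g(x) >= 0,  x * g(x) = 0  via the
    nonsmooth equation Phi(x) = min(x, g(x)) = 0, picking at each step
    one element of the Clarke generalized derivative of Phi."""
    x = x0
    for _ in range(maxit):
        phi = min(x, g(x))
        if abs(phi) < tol:
            break
        d = 1.0 if x <= g(x) else dg(x)   # a selection from the Clarke Jacobian
        x -= phi / d
    return x

# NCP with g(x) = x^2 - 4: the complementary solution is x = 2
x_star = semismooth_newton(lambda t: t * t - 4.0, lambda t: 2.0 * t, x0=1.0)
```

Semismoothness of the min-function is precisely what yields the superlinear convergence visible in the last iterates; the semismooth* theory of the talk extends this reasoning to genuinely set-valued parts of \( F\) .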
Rank estimators in robust linear regression via linear programming |
Michal Černý |
Rank estimators for linear regression models are defined as minimizers of Jaeckel's dispersion function \( F\) . As observed by Osborne, minimization of \( F\) can be written as a linear programming problem with \( n!\) constraints, where \( n\) stands for the number of observations. This property prohibits the usage of standard LP techniques, such as the Simplex Method or Interior Point Methods. We design two new algorithms for solving this problem. The first method can be proven to work in unconditional polynomial time. It is based on the theory of the Ellipsoid Method with membership and separation oracles as developed by Grötschel, Lovász and Schrijver. The second method, called Walk-on-arrangements, exploits the geometry of hyperplane arrangements as studied in combinatorial geometry. This method can be shown to be polynomial under the assumption \( p = O(1)\) , where \( p\) stands for the number of regressors. We compare the methods according to both theoretical and practical performance. We also point out some implementation issues concerning the R package Rfit and discuss certain differences between our approach and other iterative techniques used in this area, such as Iteratively Reweighted Least Squares. The talk is based on a series of papers: [M. Černý, M. Rada, J. Antoch and M. Hladík, A class of optimization problems motivated by rank estimators in robust regression, Optimization (latest articles), https://doi.org/10.1080/02331934.2020.1812604], [M. Černý, M. Hladík and M. Rada, Walks on hyperplane arrangements and optimization of piecewise linear functions, submitted to Mathematical Methods of Operations Research, preprint: https://arxiv.org/abs/1912.12750] and [J. Antoch, M. Černý and R. Miura, Selected algorithms for \( R\) -regression estimators, submitted to Computational Statistics].

Coauthor(s): Jaromír Antoch, Miroslav Rada, Milan Hladík

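For reference, Jaeckel's dispersion with Wilcoxon scores (one common choice of score function; this direct evaluation is only a sketch, not the LP reformulation discussed in the talk):

```python
import numpy as np

def jaeckel_dispersion(beta, X, y):
    """D(beta) = sum_i a(R(e_i)) e_i with Wilcoxon scores
    a(i) = sqrt(12) * (i/(n+1) - 1/2), where R(e_i) is the rank of the
    residual e_i.  D is non-negative, convex and piecewise linear in beta."""
    e = y - X @ beta
    n = len(e)
    ranks = np.argsort(np.argsort(e)) + 1          # ranks of the residuals
    a = np.sqrt(12.0) * (ranks / (n + 1.0) - 0.5)  # centered, increasing scores
    return float(a @ e)
```

Since the scores sum to zero, D is invariant to an intercept shift, which is why rank estimators leave the intercept to be estimated separately; the piecewise linearity over the hyperplane arrangement of residual ties is what the Walk-on-arrangements method exploits.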
Effects of immigration on public finances |
Tomas Domonkos |
We estimate the fiscal effects of migration in Slovakia. The methodology employed is generational accounting, disaggregated with respect to the native and foreign population. Additionally, the model accounts for gender and the level of education. We present the static as well as the long-term dynamic approach. The demographic projections used, running from 2018 to 2200, are disaggregated into migrants and natives. We test five alternative scenarios: i) no migration from 2018; ii) a most likely migration scenario; iii) a high migration scenario; iv) all foreigners in the country have a high level of education; v) all foreigners in the country have a low level of education. The results show that foreigners pay slightly lower net taxes than the natives. However, thanks to the younger age distribution of the foreign population, one foreigner contributes on average more to the public purse than one native does. We furthermore show that migration can have a positive fiscal impact; however, the age distribution and the educational structure of the foreign population, along with the number of foreigners, are among the key parameters. On the aggregate level, the total effects seem to be rather small compared to the overall long-term fiscal imbalance caused by ageing in Slovakia. (Supported by APVV-19-0352 and VEGA 2/0143/21)
Deep Smoothness WENO method with applications in finance |
Matthias Ehrhardt |
We present the novel deep smoothness weighted essentially non-oscillatory (WENO-DS) method and its application in finance. In order to improve the existing WENO method, we apply a deep learning algorithm to modify the smoothness indicators of the method. This is done in such a way that the consistency and convergence of the method are preserved. We present our results on the example of a European digital option. Here we avoid the spurious oscillations, especially in the first time steps of the numerical solution.

Coauthor(s): Tatiana Kossaczká, Michael Günther

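For reference, here are the classical WENO5 (Jiang-Shu) smoothness indicators and reconstruction that WENO-DS modifies; the deep-learning modification of the indicators itself is not reproduced here.

```python
import numpy as np

def weno5_reconstruct(v1, v2, v3, v4, v5, eps=1e-6):
    """Classical 5th-order WENO reconstruction of the cell-interface value
    u_{i+1/2} from five cell averages v1..v5 (Jiang-Shu indicators)."""
    # smoothness indicators of the three 3-point sub-stencils
    b0 = 13/12*(v1 - 2*v2 + v3)**2 + 1/4*(v1 - 4*v2 + 3*v3)**2
    b1 = 13/12*(v2 - 2*v3 + v4)**2 + 1/4*(v2 - v4)**2
    b2 = 13/12*(v3 - 2*v4 + v5)**2 + 1/4*(3*v3 - 4*v4 + v5)**2
    # nonlinear weights built from the ideal weights (1/10, 6/10, 3/10)
    a = np.array([0.1/(eps + b0)**2, 0.6/(eps + b1)**2, 0.3/(eps + b2)**2])
    w = a / a.sum()
    # third-order candidate reconstructions on the three sub-stencils
    p = np.array([(2*v1 - 7*v2 + 11*v3)/6,
                  (-v2 + 5*v3 + 2*v4)/6,
                  (2*v3 + 5*v4 - v5)/6])
    return float(w @ p)
```

WENO-DS replaces the hand-crafted indicators b0, b1, b2 by learned corrections while keeping the convex-combination structure, which is what preserves consistency and convergence.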
Searching for low-rank solutions to semidefinite problems with a specific structure |
Terézia Fulová |
In real-life applications, it is not rare to encounter optimization problems with matrix variables in which we are interested in finding a low-rank optimal solution. Such problems arise, e.g., in sensor network localization, multidimensional scaling, graph theory, and factor model calibration. Optimization problems involving the rank function are NP-hard in general. Although several convex relaxations, heuristics, and rank-reduction algorithms for solving rank-constrained semidefinite problems have been introduced, they do not always guarantee finding a low-rank optimal solution. For instance, the well-known trace heuristic cannot be applied to problems involving matrices with a constant diagonal. In this contribution, we present new approaches for solving rank-constrained semidefinite problems involving matrices with a specific structure.
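One elementary building block behind many rank-reduction heuristics (a generic textbook tool, not the new approach of the talk) is the nearest rank-\( r\) approximation of a PSD matrix obtained by eigenvalue truncation:

```python
import numpy as np

def nearest_psd_rank_r(A, r):
    """Best rank-r approximation in Frobenius norm of a symmetric PSD
    matrix A (Eckart-Young), by truncating its eigendecomposition."""
    w, V = np.linalg.eigh(A)             # eigenvalues in ascending order
    idx = np.argsort(w)[::-1][:r]        # keep the r largest eigenvalues
    return (V[:, idx] * w[idx]) @ V[:, idx].T
```

Note that this projection generally destroys structural constraints such as a constant diagonal, which is one reason structure-aware methods like those of the talk are needed.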
Maximum likelihood parameter estimation for discrete state space and continuous time stochastic processes |
Ján Gašper |
Maximum likelihood estimation of parameters is one of the standard ways of model calibration. In the case of a discrete state space and continuous time, this is commonly achieved by numerical simulations or by solving the so-called master equation. The master equation can become very high dimensional and therefore difficult to solve. In our approach, we exploit the sparse nature of the master equation to obtain fast but accurate results.
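A minimal sketch of this idea for a hypothetical birth-death process (sizes and rates invented for illustration): the generator is stored as a sparse matrix and the master equation is solved via the action of the matrix exponential, never forming the dense exponential itself.

```python
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import expm_multiply

# hypothetical birth-death chain on states 0..N, birth rate lam, death rate mu
N, lam, mu = 50, 1.0, 0.5
birth = lam * np.ones(N)                         # rate of i -> i+1
death = mu * np.ones(N)                          # rate of i+1 -> i
out_rate = np.r_[birth, 0.0] + np.r_[0.0, death]
Q = diags([birth, -out_rate, death], [1, 0, -1], format='csr')  # sparse generator

# master equation dp/dt = Q^T p, solved as p(t) = exp(t Q^T) p0
p0 = np.zeros(N + 1)
p0[10] = 1.0                                     # start in state 10
p_t = expm_multiply(1.0 * Q.T, p0)               # distribution at t = 1
```

The resulting p_t is the likelihood ingredient: for an observed transition, the relevant entry of p_t evaluated at the observed time enters the log-likelihood, and the sparsity of Q keeps each evaluation cheap.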
Lower Bounds and Optimal Algorithms for Personalized Federated Learning |
Slavomír Hanzely |
In this work, we consider the optimization formulation of personalized federated learning recently introduced by Hanzely and Richtárik (2020) which was shown to give an alternative explanation to the workings of local SGD methods. Our first contribution is establishing the first lower bounds for this formulation, for both the communication complexity and the local oracle complexity. Our second contribution is the design of several optimal methods matching these lower bounds in almost all regimes. These are the first provably optimal methods for personalized federated learning. Our optimal methods include an accelerated variant of FedProx, and an accelerated variance-reduced version of FedAvg/Local SGD. We demonstrate the practical superiority of our methods through extensive numerical experiments.

Coauthor(s): Filip Hanzely, Samuel Horváth, Peter Richtárik

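For context, the formulation of Hanzely and Richtárik (2020) referred to above can be written (up to notation) as \[ \min_{x_1,\dots,x_n} \; \frac{1}{n}\sum_{i=1}^n f_i(x_i) \;+\; \frac{\lambda}{2n} \sum_{i=1}^n \bigl\| x_i - \bar{x} \bigr\|^2, \qquad \bar{x} := \frac{1}{n}\sum_{i=1}^n x_i, \] where \( f_i\) is the local loss of device \( i\) and the penalty parameter \( \lambda \geq 0\) interpolates between purely local models (\( \lambda \to 0\) ) and the classical single-model federated learning objective (\( \lambda \to \infty\) ).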
On linear programming with multiple uncertain objectives and uncertain weights |
Milan Hladík |
Real-world problems are usually subject to various kinds of uncertainty or inexactness, which has to be taken into account. Uncertain input data can be modelled in various ways, depending on the nature of the uncertainty. We follow the ideas of interval analysis, which assumes that the only information we have are intervals covering the true values. That is, we have no additional information on the distribution within the intervals (in contrast to stochastic or fuzzy programming). In particular, we study linear programming problems with multiple objective functions. Multiple objectives are usually scalarized by using weights, provided by a user or somehow estimated. We suppose that the objective coefficients and the weights are not known exactly and that only interval enclosures are provided. These two types of interval values naturally lead to several concepts of Pareto optimal solutions. We discuss these concepts and compare them to each other. For each of them, we attempt to characterize the corresponding kind of efficiency in two cases: for a basic solution and for a general feasible solution. We also investigate the computational complexity of deciding whether a given solution is efficient. It turns out that some concepts are polynomially decidable, whereas the others are NP-hard; each concept is thus classified as either polynomial or NP-hard. All the above characterizations and the complexity classification are developed for a given feasible solution. We leave aside the problem of how to find a convenient candidate solution; this seems to be a tough problem and we postpone it for future research.
Elicitability of set-valued functionals |
Jana Hlavinová |
Estimating different risk measures, such as Value at Risk or Expected Shortfall, is a common task in various financial institutions. Assessing the quality of different estimation procedures via comparative backtests crucially hinges on the notion of elicitability. A risk measure is called elicitable if there is a strictly consistent scoring function for it. That is, a function S(x,y) of a forecast x and an observation y, such that its expectation with respect to y is uniquely minimized in x by the correct forecast. Recently, financial mathematics has witnessed a shift towards set-valued measures of risk, in particular when it comes to financial systems and portfolios. The question therefore arises what notion of elicitability is appropriate when evaluating set-valued risk measures, or set-valued functionals in general. We introduce two modes of elicitability: an exhaustive and a selective one. While in the exhaustive mode, the forecasts aim to specify the whole set, in the selective mode one is content with reporting any point within the set. We study the structural relations between these two modes and their properties, notably establishing a mutual exclusivity result between the two notions. Finally, we study several relevant examples of set-valued functionals and answer the question whether and in which sense they are elicitable.

Coauthor(s): Tobias Fissler, Rafael Frongillo, Birgit Rudloff

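A numerical illustration of strict consistency for a simple point-valued (not set-valued) functional, with toy data of my own: the squared error score is minimized in expectation by the mean, which is the sense in which the mean is elicitable.

```python
import numpy as np

# S(x, y) = (x - y)^2 is a strictly consistent scoring function for the mean:
# E[S(x, Y)] is uniquely minimized at x = E[Y].
rng = np.random.default_rng(0)
y = rng.normal(3.0, 1.0, 10_000)            # observations with true mean 3

grid = np.linspace(0.0, 6.0, 601)           # candidate forecasts
exp_score = [np.mean((x - y) ** 2) for x in grid]
best = grid[int(np.argmin(exp_score))]      # empirical minimizer ~ sample mean
```

The talk asks what replaces this picture when the forecast x is a set: in the exhaustive mode the score compares whole sets, while in the selective mode any point of the true set should be an optimal report.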
Properties of the Cone of Non-negative Polynomials and Duality |
Jakub Hrdina |
Polynomial optimization problems are problems of optimizing a polynomial over a feasible set defined by polynomial inequalities. This class encompasses many problems within various fields of mathematics, such as combinatorial optimization, mixed-integer linear programming, partial differential inequalities, and more. Multivariate polynomial optimization problems are typically transformed into problems over the convex cone of non-negative polynomials. Restricting ourselves to polynomials of degree at most \( d\) and to a basic semi-algebraic set \( K\) , we study the geometric and topological properties of the cone of polynomials that are non-negative on \( K\) , and of the respective dual cone.
Stochastic Optimization Approach to Data Envelopment Analysis |
Michal Houda |
We present a stochastic optimization approach, in particular chance-constrained optimization, for a problem of evaluating the productivity of decision-making units by data envelopment analysis (DEA). Inputs and outputs of these units are considered random, described by their probability distribution. Furthermore, we deal with several DEA models and with specific types of dependency between inputs and/or outputs. We provide a deterministic approximation of the problem, report several properties of the model and present a simple numerical example.
The Static Vaccination Game |
Miroslava Jánošová |
The paper aims to inspect individuals' decisions to vaccinate against the novel coronavirus using a game theory model by Geoffrey Heal and Howard Kunreuther (2005), and to examine the possibility of re-infection despite vaccination, which may considerably change the results of the model as well as individuals' decisions. The model itself is a static one, in which individuals choose whether they want to vaccinate. Due to the nature of the vaccination, we will assume that agents are the vaccine recipients, who must themselves consider the side effects of the specific vaccine used, the probability of being infected, and the degree of the infection risk. The model studies their behavior depending on the behavior of other agents under the assumption of equally informed players. In addition to the probabilities of infection, the economic costs of vaccination and treatment enter the model. Although we will look at the game from the perspective of a utility-maximizing individual, that individual is affected by the whole population and its risk perception of the vaccination. Another goal is to find the Nash equilibrium and hence to identify the strategies of this game.

Coauthor(s): Veronika Miťková

Implementation of the child factor in the old-age pension system |
Tatiana Jašurková |
Due to the unfavorable demographic situation, marked by an aging population and low fertility rates, countries with established pay-as-you-go pension systems face a problem with the sustainability of such systems. Since pension systems with static parameters do not reflect the development of the exogenous variables on which the system financially depends, the stability of such systems cannot be expected. This creates the need to modify pay-as-you-go pension systems so as to ensure equivalent and long-term sustainable pension systems. The same applies to the pension system of the Slovak Republic. Recent legislative changes in the old-age pension system introduce the principle of fairness in the care of children, i.e. if someone does not work because they are taking care of a child, this must not have a negative impact on their pension. The changes also incorporate the possibility of intergenerational solidarity from children to their parents. In that matter, the new constitutional law on the old-age pension system in Slovakia introduces the idea of a parental bonus into the pension system. Therefore, in this paper we focus on the connection between the pension system and fertility, and on the possibilities of adjusting the pension model in favor of fertility growth, taking into account the aforementioned parental bonus.
Efficiency effects of mergers: harmonising merged production |
Richard Kališ |
The model of potential gains from mergers provides a useful decomposition into a technical, a size and a harmony effect. While the technical and size effects are well elaborated, the interpretation of the harmony effect remains open. We provide an analytic insight into the above-mentioned decomposition. We express the harmony effect as a function of the relative difference in the structures of the firms and of the relative difference in their size. These factors can play an important role and in some cases can even outweigh the potential negative outcome of a merger due to decreasing returns to scale. Furthermore, we show that the sign of the harmony effect does not depend on the specific form of the production function, but on its shape. We prove that in the case of a concave production function the harmony effect contributes positively to the gains from mergers. In the case of multiple inputs and multiple outputs, the potential effects are illustrated for the Slovak hospital sector. This application provides a detailed look into the reallocation process.

Coauthor(s): Mikuláš Luptáčik, Daniel Dujava

Hamilton-Jacobi-Bellman equation in stochastic dynamic portfolio optimization |
Soňa Kilianová |
For a problem of expected terminal utility maximization, motivated by optimizing a financial portfolio over multiple time periods, we formulate a Hamilton-Jacobi-Bellman nonlinear parabolic partial differential equation. By means of a Riccati-type transformation, we transform it to a quasilinear partial differential equation, which is easier to solve. As a byproduct of the transformation, we obtain an auxiliary parametric quadratic optimization problem, the value function of which enters the quasilinear PDE. We discuss some interesting properties of this value function. We also propose a numerical scheme based on finite volumes, which we utilized for solving the quasilinear partial differential equation. We present results of a DAX30 Index optimization as an example.

Coauthor(s): Daniel Ševčovič, Mária Trnovská

Robustness of stochastic programs with endogenous randomness via contamination |
Miloš Kopa |
Investigating the stability of stochastic programs with respect to changes in the underlying probability distributions represents an important step before deploying any model to production. Often, the uncertainty in stochastic programs is not perfectly known and thus is approximated. Misspecification of the stochastic distribution and approximation errors can affect the model solution, consequently leading to suboptimal decisions. It is of utmost importance to be aware of such errors and to have an estimate of their influence on the model solution. One approach which estimates the possible impact of such errors is the contamination technique. The methodology studies the effect of a perturbation of the probability distribution by some contaminating distribution on the optimal value of stochastic programs. Lower and upper bounds for the optimal values of perturbed stochastic programs have been developed for numerous types of stochastic programs with exogenous randomness. In this paper, we first extend the current results by developing a tighter lower bound applicable to a wider range of problems. Thereafter, we define contamination for stochastic programs with decision-dependent randomness and prove various lower and upper bounds. Finally, we illustrate the contamination results on a real example of a stochastic program with endogenous randomness from the financial industry.

Coauthor(s): Tomáš Rusý

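A numerical sketch of the basic exogenous-randomness contamination bound (toy data invented for illustration): the optimal-value function \( t \mapsto \varphi(t)\) along the contamination path \( (1-t)P + tQ\) is concave, hence it dominates the linear interpolation between \( \varphi(0)\) and \( \varphi(1)\) .

```python
import numpy as np

def phi(t, xi_P, xi_Q, xs):
    """Optimal value of min_x E_{(1-t)P + tQ} |x - xi| over a grid of x,
    where P and Q are the empirical distributions of xi_P and xi_Q."""
    def obj(x):
        return (1 - t) * np.mean(np.abs(x - xi_P)) + t * np.mean(np.abs(x - xi_Q))
    return min(obj(x) for x in xs)

xi_P = np.array([0.0, 1.0, 2.0])     # original scenarios
xi_Q = np.array([10.0])              # contaminating scenario
xs = np.linspace(-1.0, 11.0, 1201)   # grid of candidate decisions
ts = np.linspace(0.0, 1.0, 11)
vals = [phi(t, xi_P, xi_Q, xs) for t in ts]
# concavity of phi yields the lower bound phi(t) >= (1-t) phi(0) + t phi(1)
```

The bound holds because the objective is linear in t for each fixed decision x, so the minimum over x is concave in t; the talk's contribution is a tighter lower bound and the extension to decision-dependent randomness, where this simple argument no longer applies directly.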
Acceptability Maximization |
Gabriela Kováčová |
In this work we study the optimal investment problem, where a coherent acceptability index (CAI) is used to measure the portfolio performance. We call this problem the acceptability maximization. First, we study the one-period (static) case and propose a numerical algorithm that approximates the original problem by a sequence of risk minimization problems. In the second part, we investigate the acceptability maximization in a discrete-time dynamic setup. Using robust representations of dynamic CAIs in terms of a family of dynamic coherent risk measures (DCRMs), we establish an intriguing dichotomy: if the corresponding family of DCRMs is recursive (i.e. strongly time consistent) and assuming some recursive structure of the market model, then the acceptability maximization problem reduces to just a one-period problem and the maximal acceptability is constant across all states and times. On the other hand, if the family of DCRMs is not recursive, which is often the case, then the acceptability maximization problem is ordinarily a time-inconsistent stochastic control problem. For two particular dynamic CAIs - the dynamic risk-adjusted return on capital and the dynamic gain-to-loss ratio - we overcome this issue by considering related bi-objective problems and applying the set-valued Bellman principle.

Coauthor(s): Birgit Rudloff, Igor Cialenco

Relaxation of a quadratic program to a linear program |
Petr Lachout |
Optimization programs are often compared and related through their optimal values and optimal solutions. Given an optimization program, we seek another one which is easier to solve analytically or numerically and which gives the same, or a reasonably approximated, optimal value and optimal solutions of the original program. We will attempt to relax a quadratic program to a linear one, with a discussion of the relation between the two.
Robust efficiency analysis of Czech and Slovak universities |
Markéta Matulová |
As the number of universities, study programs, and their capacities has grown rapidly during the last twenty years, the availability of tertiary education in society has increased greatly. Thus, a question arises about the performance of individual institutions in the university education market. Activities of universities are usually subsidized using public funds, typically based on the number of students. However, efficiency in providing educational services is not automatically guaranteed. So, it is helpful to measure, evaluate, and benchmark the performance of educational institutions. Higher efficiency in this area can lead to a more appropriate use of public money. The aim of this study was to evaluate the efficiency of Czech and Slovak universities and identify the factors influencing efficiency. The analysis was carried out on a sample comprising the data of nearly 50 universities in the Czech Republic and Slovakia during the years 2011–2016. The data were analysed through a two-stage procedure, where Data Envelopment Analysis (DEA) was used in the first stage to evaluate the efficiency. The input variables included total academic staff and personnel expenditures; outputs comprised the total number of students and graduates, international mobilities, and research and development performance indicators. The efficiency scores were computed for all the examined institutions each year using several DEA models, including the robust approach. In the second stage of the analysis, DEA scores were explained by a set of operational, socio-economic, and demographic explanatory variables to find determinants of efficiency. The results of our analysis help to identify the operating conditions that are the most important for achieving good performance and can be used as a guide for public policy.
|
Coauthor(s): Hana Fitzová |
|
Modelling the sustainability and adequacy of the pension system in the perspective of Slovak population ageing |
Peter Martiška |
Is the current setting of the pension system sustainable? Will the entitlements granted adequately replace work income? The ability to adjust the pension system to fulfill the conditions of sustainability and adequacy will greatly depend on the demographic development in Slovakia. In the upcoming decades, Slovakia will be one of the fastest ageing countries in the EU, mainly because of low fertility and the retirement of strong age cohorts in the coming years. Using a microsimulation model (SLAMM), we will model the development of pension revenues and expenditures. The model is currently used to model the labour market in Slovakia, and we extend it to the modelling of the pension system. At present, a different model, SLOPEM, is used by the Institute of Financial Policy (IFP) and by the Council for Budget Responsibility (CBR) for pension projections. The two institutions use different assumptions in the model but come to similar conclusions: the pension system is unsustainable in the long term, with pension expenditure set to increase from 4.3 % of GDP (CBR result) to 4.7 % of GDP (IFP result), while the system will remain adequate, with the benefit ratio remaining relatively stable at around 40 %. Due to the uncertainty of long-term projections, it is also necessary to model sensitivity scenarios. The most beneficial one, from the sustainability standpoint, is the reintroduction of a retirement age rising with life expectancy. |
|
Coauthor(s): Miroslav Štefánik |
|
Different ways of using second pillar savings in Slovakia |
Igor Melicherčík |
Recently, the first pensions have been paid from savings accumulated in the second pillar of the pension system in Slovakia. The talk emphasizes that the advantage of the second pillar cannot be assessed solely by the level of pensions paid; its strength lies in the number of alternatives it offers. Based on calculations, we have analysed the benefits of various alternatives in different circumstances. At present, the need for long-term care in the case of dependency is a common problem, and most pensioners do not have the means to cover the associated costs. We present our own model of long-term care insurance as well as the replacement rate it could provide. |
|
Coauthor(s): Gábor Szűcs |
|
Optimal share of technical education in labor |
Eduard Nežinský |
Given the structure of the Slovak economy, the labor market features a long-lasting mismatch between the supply of and demand for technically educated labor as a production factor. Applying the idea of Luptáčik and Bohm (2010), we determine the optimal use of non-technically and technically educated labor across industries. The results could be relevant for educational policy design. For individual industries, the best possible output as well as input use are determined. In the course of optimization, economic interdependencies are accounted for by the linear input-output model acting as a constraint, along with the total primary factor endowment. The solutions subsequently enter a DEA model to generate the frontier of a multi-output, multi-input technology. |
|
Coauthor(s): Hudcovský Martin, Morvay Karol |
|
Spectral gap optimization using graph bridging |
Soňa Pavlíková |
The smallest positive eigenvalue of a graph representing a molecule, for example a hydrocarbon, plays an important role in quantum chemistry: it represents the minimal binding energy of the molecule. The energy of the highest occupied molecular orbital (HOMO) corresponds to the smallest positive eigenvalue \( \lambda^+_1\) . The energy of the lowest unoccupied molecular orbital (LUMO) corresponds to the largest negative eigenvalue \( \lambda^-_1\) . A very important quantity in chemistry is the so-called HOMO - LUMO separation gap, which is a multiple of the difference \( \lambda^+_1 - \lambda^-_1\) . In our contribution we analyze the HOMO - LUMO spectral gap of a weighted graph. We construct a new graph by bridging two weighted graphs over a bipartite graph. The aim is to maximize the spectral gap with respect to the bridging graph. |
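In symbols, writing \( A_B\) for the weighted adjacency matrix of the graph obtained by bridging the two graphs over a bipartite graph \( B\) (editor's schematic notation, not the authors'), the quantity being optimized can be sketched as:

```latex
% HOMO-LUMO spectral gap of a weighted graph with adjacency matrix A:
% \lambda_1^{+}(A) = smallest positive eigenvalue (HOMO level),
% \lambda_1^{-}(A) = largest negative eigenvalue (LUMO level).
\mathrm{gap}(A) \;=\; \lambda_1^{+}(A) - \lambda_1^{-}(A),
\qquad
\max_{B}\; \mathrm{gap}(A_B).
```
|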
|
Coauthor(s): Daniel Ševčovič |
|
Regressions with Kernel Functions |
Alois Pichler |
We reconstruct functions that are observed with a superimposed error at random locations. We address the problem in reproducing kernel Hilbert spaces. It is demonstrated that the corresponding sample average estimator, which is often derived by employing Gaussian random fields, converges in mean norm to the predictor, i.e., the conditional expectation. |
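A minimal numerical sketch of this setting (editor's illustration, not the speaker's method): noisy observations of a function at random locations are fitted by kernel ridge regression in the RKHS of a Gaussian kernel. All names and parameter values below are assumptions chosen for the demonstration.

```python
# Illustrative sketch: reconstructing f(x) = sin(2*pi*x) from noisy samples
# at random locations via kernel ridge regression in a Gaussian-kernel RKHS.
import numpy as np

rng = np.random.default_rng(0)

def gaussian_kernel(x, y, bandwidth=0.2):
    """Gaussian (RBF) kernel matrix k(x_i, y_j) = exp(-(x_i - y_j)^2 / (2 h^2))."""
    return np.exp(-(x[:, None] - y[None, :]) ** 2 / (2 * bandwidth ** 2))

# Noisy observations at n random locations in [0, 1].
n = 60
x_obs = rng.uniform(0.0, 1.0, n)
y_obs = np.sin(2 * np.pi * x_obs) + 0.1 * rng.standard_normal(n)

# Kernel ridge coefficients: alpha = (K + n * lam * I)^{-1} y.
lam = 1e-3
K = gaussian_kernel(x_obs, x_obs)
alpha = np.linalg.solve(K + n * lam * np.eye(n), y_obs)

# Evaluate the estimator on a grid; it should track the true function closely.
x_grid = np.linspace(0.0, 1.0, 101)
f_hat = gaussian_kernel(x_grid, x_obs) @ alpha
rmse = np.sqrt(np.mean((f_hat - np.sin(2 * np.pi * x_grid)) ** 2))
print(round(rmse, 3))
```
|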
|
|
|
Pension scheme fees and costs: What makes pension schemes cheaper? |
Maria Siranova |
We calculate the expensiveness of 51 pension schemes via charge ratios (CRs), the percentage difference between accumulated savings and savings without any fees and costs. With a set of \( 40\) potential drivers, we explain the variation across CRs within a Bayesian model averaging framework. We find that cheaper pension schemes are associated with the existence of performance fees and the absence of assets-under-management (AUM) fees, and are prevalent in more developed economies. Our findings therefore support an increasing role of performance and contribution fees, a decreasing role of AUM fees, and a reduction in the overall administrative burden and institutional costs for funds. |
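Following the definition above, the charge ratio can be sketched as (the symbols \( S_0, S_f\) are the editor's, denoting savings accumulated without fees and the actually accumulated savings after fees, respectively):

```latex
% Charge ratio: percentage shortfall of fee-laden savings S_f
% relative to hypothetical fee-free savings S_0.
\mathrm{CR} \;=\; \frac{S_0 - S_f}{S_0}\cdot 100\,\%
```
|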
|
Coauthor(s): Marek Radvansky, Katarina Lucivjanska, Stefan Lyocsa |
|
Parental bonus in Slovak pension system – fiscal and redistributive impacts |
Ján Šebo |
The Slovak pension system is undergoing a major reform in 2021, one of whose flagships is the introduction of a parental bonus. Several countries have experimented with fertility-driven policies within the pension system: some have introduced a bonus/malus system on pension contributions, and some have introduced contribution sharing to mitigate the risk of lower pensions for child-caring persons during their economic life. Slovakia has decided to interconnect the system of contributors and pension beneficiaries directly by redirecting part of the paid pension contributions to the account of the pensioner as a tool of intergenerational altruism and transfers. The paper presents some initial insights into how the parental bonus is created and what fiscal and redistributive impacts can be expected. Using a microsimulation model of the pension system, we extend the model by adding intergenerational connections among individuals and study the effects of the parental bonus on the overall fiscal stability of the PAYG scheme as well as on the distribution of retirement income of retirees before and after the bonus is introduced. |
|
Coauthor(s): Daniela Dankova, Ivan Kralik |
|
Nonlinearities in financial modelling |
Daniel Ševčovič |
This survey talk is focused on qualitative and numerical analyses of fully nonlinear partial differential equations of parabolic type arising in financial mathematics. The main purpose is to review various nonlinear extensions of the classical Black-Scholes theory for pricing financial instruments, as well as models of stochastic dynamic portfolio optimization leading to the Hamilton-Jacobi-Bellman (HJB) equation. After suitable transformations, both problems can be represented by solutions to nonlinear parabolic equations. The qualitative analysis will be focused on issues concerning the existence and uniqueness of solutions. In the numerical part we discuss stable finite-volume and finite-difference schemes for solving fully nonlinear parabolic equations. |
|
|
|
Modelling foreign labour inflows using a dynamic microsimulation model of an ageing country - Slovakia |
Miroslav Štefánik |
In this paper, we introduce our approach to projecting the development of the available labour supply. We focus on the situation on the Slovak labour market, where the increased inflow of foreign labour between 2012 and 2019 was driven by the demand for labour arising from the need to replace workers leaving the labour market. Using data from the Labour Force Survey, SLAMM simulates the development of the domestic labour supply by reproducing demographic processes and educational attainment, together with decisions on economic activity and employment. The discrepancy between the domestic labour supply and the structure of employment expected by a macroeconomic forecast drives the projected inflow of foreign workers. The deficit of labour, driven mainly by demographic change, is expected to be balanced by labour immigration. Besides its potential for predicting future labour supply, we also show some of the policy-relevant scenarios simulated by SLAMM. (Supported by APVV-17-0329 and VEGA 2/0150/21) |
|
Coauthor(s): Tomáš Miklošovič |
|
Some features of non-radial graph models in Data Envelopment Analysis |
Mária Trnovská |
Data Envelopment Analysis defines two basic classes of models for assessing the performance of decision-making units: radial and non-radial. Radial models assume a proportional change of inputs or outputs, where residual input excesses and output shortfalls are not directly accounted for in the measure of inefficiency. In contrast, non-radial models allow each input and output component to change individually and independently, and integrate input excesses and output shortfalls in their entirety into an overall efficiency or inefficiency measure. Due to this feature, non-radial models have gained popularity; their properties have been analysed and they are often used in practice. In this contribution, we present a general scheme for non-radial models and use it to analyse some of their properties. |
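As one well-known instance of the non-radial class (an example chosen by the editor for illustration, not necessarily the scheme of the talk), Tone's slacks-based measure for a unit \( (x_o, y_o)\) with input matrix \( X\) and output matrix \( Y\) integrates input excesses \( s^-\) and output shortfalls \( s^+\) directly into the efficiency score:

```latex
% Slacks-based measure (SBM, Tone 2001) for a DMU with m inputs and s outputs:
\rho \;=\; \min_{\lambda,\, s^-,\, s^+}\;
\frac{1 - \tfrac{1}{m} \sum_{i=1}^{m} s_i^- / x_{io}}
     {1 + \tfrac{1}{s} \sum_{r=1}^{s} s_r^+ / y_{ro}}
\quad \text{s.t.} \quad
x_o = X\lambda + s^-, \;\; y_o = Y\lambda - s^+, \;\; \lambda,\, s^-,\, s^+ \ge 0.
```
|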
|
Coauthor(s): Margaréta Halická |
|
Application of maximal monotone operator method for solving Hamilton-Jacobi-Bellman equation arising from optimal portfolio selection problem |
Cyril Izuchukwu Udeani |
In this paper, we investigate a fully nonlinear evolutionary Hamilton-Jacobi-Bellman (HJB) parabolic equation utilizing the monotone operator technique. We consider the HJB equation arising from the optimal portfolio selection problem, where the goal is to maximize the conditional expected value of the terminal utility of the portfolio. The fully nonlinear HJB equation is transformed into a quasilinear parabolic equation using the so-called Riccati transformation method. The transformed parabolic equation can be viewed as a porous medium type equation with a source term. Under some assumptions, we show that the diffusion function of the quasilinear parabolic equation is globally Lipschitz continuous, which is a crucial requirement for solving the Cauchy problem. We employ Banach's fixed point theorem to obtain the existence and uniqueness of a solution to the general form of the transformed parabolic equation in a suitable Sobolev space in an abstract setting. Some financial applications of the proposed result are presented in one-dimensional space. |
|
Coauthor(s): Daniel Sevcovic |
|
Optimization of the current value of a multi-objective loss function and a set-valued Hamilton-Jacobi-Bellman equation |
Daniela Visetti |
Two features are often present in economic problems. The first is that the required optimization involves more than one conflicting objective function. Such multi-dimensional objective values are ordered via convex cones, whereas a scalarization via linear functions is, of course, a very strong simplification of the problem. A set-valued approach to the multicriteria case was proposed in the classical framework in the paper by Hamel and Visetti, The value functions approach and Hopf-Lax formula for multiobjective costs via set optimization, Journal of Mathematical Analysis and Applications v. 483 (2020). The second feature is a discount factor, which makes it possible to determine the current value of the Lagrangian. This was done for the real-valued case in the paper by Rincón-Zapatero, Hopf-Lax formula for variational problems with non-constant discount, Journal of Geometric Mechanics v. 1 (2009). Both targets are aimed at here: a mathematical model that includes the discount factor and the multi-objective nature of the problem is presented. The proposed approach consists of two steps: first, extend the problem to one with values in a complete lattice of sets; second, apply concepts and tools from set optimization theory as surveyed in Hamel, Heyde, Löhne, Rudloff, Schrage, Set Optimization and Applications - The State of the Art, Springer 2015. The minimization of the multiobjective Lagrangian with non-constant discount in the complete lattice approach leads to a new definition of the value function. Bellman's optimality principle and the Hopf-Lax formula are derived. The value function is shown to be a solution of a set-valued Hamilton-Jacobi-Bellman equation. |
|
|