A graph G is a multi-interval PCG if there exist an edge-weighted tree T with non-negative real weights and disjoint intervals of the non-negative real half-line such that each node of G is uniquely associated with a leaf of T and there is an edge between two nodes in G if and only if the weighted distance between their corresponding leaves in T lies within one of these intervals. If the number of intervals is k, then we call the graph a k-interval-PCG; in symbols, G = k-interval-PCG(T, I1, . . . , Ik). It is known that 2-interval-PCGs do not contain all graphs, and the smallest known graph outside this class has 135 nodes. Here we prove that all graphs with at most 8 nodes are 2-interval-PCGs, thus taking one step towards determining the smallest value of n such that there exists an n-node graph that is not a 2-interval-PCG.
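To make the definition concrete, here is a minimal sketch (my own illustration, not material from the paper): assuming the edge-weighted tree is given as a networkx graph, it builds the k-interval-PCG from (T, I1, . . . , Ik) by joining two leaves exactly when their weighted tree distance falls inside one of the intervals. The function and variable names are hypothetical.

```python
# Sketch of the k-interval-PCG definition (illustrative, not from the paper).
from itertools import combinations
import networkx as nx

def k_interval_pcg(tree, leaves, intervals):
    """tree: nx.Graph with non-negative 'weight' on each edge;
    intervals: list of (lo, hi) pairs, assumed pairwise disjoint."""
    # All pairwise weighted distances in the tree.
    dist = dict(nx.all_pairs_dijkstra_path_length(tree, weight="weight"))
    g = nx.Graph()
    g.add_nodes_from(leaves)
    for u, v in combinations(leaves, 2):
        d = dist[u][v]
        # Edge iff the leaf-to-leaf distance lies in one of the intervals.
        if any(lo <= d <= hi for lo, hi in intervals):
            g.add_edge(u, v)
    return g

# Toy example: a star with three unit-weight edges and the single interval [2, 3]
# yields a triangle, since every leaf-to-leaf distance equals 2.
T = nx.Graph()
T.add_weighted_edges_from([("c", "a", 1), ("c", "b", 1), ("c", "d", 1)])
G = k_interval_pcg(T, ["a", "b", "d"], [(2, 3)])
```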
A network can contain numerous spanning trees. If two spanning trees $T_i,T_j$ do not share any common edges, $T_i$ and $T_j$ are said to be pairwisely edge-disjoint. For spanning trees $T_1, T_2, ..., T_m$, if every two of them are pairwisely edge-disjoint, they are called completely edge-independent spanning trees (CEISTs for short). CEISTs can facilitate many network functionalities, and constructing CEISTs as maximally allowed as possible in a given network is a worthy undertaking. In this paper, we establish the maximal number of CEISTs in the locally twisted cube network, and propose an algorithm to construct $\lfloor \frac{n}{2} \rfloor$ CEISTs in $LTQ_n$, the $n$-dimensional locally twisted cube. The proposed algorithm has been actually implemented, and we present the outputs. Network broadcasting in the $LTQ_n$ was simulated using $\lfloor\frac{n}{2}\rfloor$ CEISTs, and the performance compared with broadcasting using a single tree.
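As a concrete reading of the CEIST property (not the paper's construction algorithm for $LTQ_n$), the following sketch checks whether a family of edge lists forms CEISTs: each list must be a spanning tree of the vertex set, and no edge may appear in two of the trees. The helper names and the edge-list encoding are assumptions.

```python
# Check the CEIST property: pairwise edge-disjoint spanning trees (illustrative sketch).
def _is_spanning_tree(edges, n):
    """edges: list of (u, v) tuples over vertices 0..n-1."""
    if len(edges) != n - 1:
        return False
    parent = list(range(n))                   # union-find to detect cycles
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x
    for u, v in edges:
        ru, rv = find(u), find(v)
        if ru == rv:
            return False                      # edge would close a cycle
        parent[ru] = rv
    return True                               # n-1 acyclic edges => spanning tree

def are_ceists(trees, n):
    """trees: list of edge lists; True iff all are spanning trees and edge-disjoint."""
    seen = set()
    for edges in trees:
        if not _is_spanning_tree(edges, n):
            return False
        norm = {frozenset(e) for e in edges}
        if seen & norm:
            return False                      # an edge is shared by two trees
        seen |= norm
    return True
```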
In this paper we argue that the performance of classifiers based on Empirical Risk Minimization (ERM) for positive unlabeled data that are designed for the case-control sampling scheme may significantly deteriorate when they are applied in a single-sample scenario. We reveal why their behavior depends, in all but very specific cases, on the scenario. We also introduce a single-sample analogue of the popular non-negative risk classifier designed for case-control data and compare its performance with the original proposal. We show that significant differences occur between them, especially when half or more of the positive observations are labeled. The opposite case, in which the ERM minimizer designed for the case-control scheme is applied to single-sample data, is also considered, and similar conclusions are drawn. Taking the difference between scenarios into account requires a single, but crucial, change in the definition of the empirical risk.
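For reference, here is a minimal numpy sketch of the non-negative PU risk for the case-control scheme, in the style of the popular non-negative risk classifier mentioned above; the function names and the sigmoid surrogate loss are my assumptions, and the single-sample analogue introduced in the paper (obtained by changing the definition of the empirical risk) is not reproduced here.

```python
# Non-negative case-control PU risk (sketch):
# R(g) = pi * R_p^+(g) + max(0, R_u^-(g) - pi * R_p^-(g)),
# where pi is the positive-class prior and the surrogate loss is the sigmoid loss.
import numpy as np

def sigmoid_loss(margin):
    # l(z) = 1 / (1 + exp(z)), small when the margin z = y * g(x) is large and positive
    return 1.0 / (1.0 + np.exp(margin))

def nn_pu_risk(scores_labeled_pos, scores_unlabeled, pi):
    """scores_*: real-valued classifier outputs g(x); pi: prior P(Y = +1)."""
    r_p_plus = sigmoid_loss(+scores_labeled_pos).mean()   # positives treated as +1
    r_p_minus = sigmoid_loss(-scores_labeled_pos).mean()  # positives treated as -1
    r_u_minus = sigmoid_loss(-scores_unlabeled).mean()    # unlabeled treated as -1
    return pi * r_p_plus + max(0.0, r_u_minus - pi * r_p_minus)
```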
Given a set of disks in the plane, the goal of the problem studied in this paper is to choose a subset of these disks such that none of its members contains the centre of any other. Each disk not in this subset must be merged with one of its nearby disks, thereby increasing the latter's radius. This problem has applications in labelling rotating maps and in visualizing the distribution of entities in static maps. We prove that this problem is NP-hard. We also present an ILP formulation for this problem, and a polynomial-time algorithm for the special case in which the centres of all disks lie on a line.
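As an illustration of the selection constraint only, here is a hedged ILP sketch in PuLP; it is my own toy model, not the paper's formulation, which must also account for merging and the resulting radius growth. It selects as many disks as possible so that no selected disk contains the centre of another selected disk; the disk data and variable names are hypothetical.

```python
# Toy ILP: mutual exclusion for disk pairs where one disk contains the other's centre.
import math
import pulp

disks = [((0.0, 0.0), 1.0), ((0.5, 0.0), 1.0), ((3.0, 0.0), 1.0)]  # (centre, radius)

def contains_centre(d_i, d_j):
    (cx, cy), r = d_i
    (px, py), _ = d_j
    return math.hypot(px - cx, py - cy) <= r

model = pulp.LpProblem("disk_selection", pulp.LpMaximize)
x = [pulp.LpVariable(f"x{i}", cat="Binary") for i in range(len(disks))]
model += pulp.lpSum(x)                      # objective: number of selected disks
for i in range(len(disks)):
    for j in range(len(disks)):
        if i != j and contains_centre(disks[i], disks[j]):
            model += x[i] + x[j] <= 1       # conflicting pair: keep at most one
model.solve(pulp.PULP_CBC_CMD(msg=False))
selected = [i for i in range(len(disks)) if x[i].value() > 0.5]
```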