
Sensory Area Task (SIT): A New Behavior

The common Lyapunov function method is used to constrain the arbitrary switching signals in the system, and an actuator compensation scheme is introduced to deal with the failure faults and bias faults in the actuators. By combining the backstepping DSC design technique with fractional-order stability theory, a novel NN adaptive switching FTC algorithm is proposed. Under the operation of the proposed algorithm, the stability and control performance of the fractional-order systems can be guaranteed. Finally, a simulation example of a permanent magnet synchronous motor (PMSM) system demonstrates the feasibility and effectiveness of the designed scheme.

In this paper, we propose an end-to-end deep learning framework, referred to as MCG-Net, integrating a convolutional neural network (CNN) with a transformer-based global context block for fine-grained delineation and diagnostic classification of four cardiac events, namely the Q-, R-, S- and T-waves, from magnetocardiogram (MCG) data. MCG-Net takes advantage of a multi-resolution CNN backbone as well as state-of-the-art (SOTA) transformer encoders that facilitate global temporal feature aggregation. Besides the novel network architecture, we introduce a multi-task learning scheme to achieve simultaneous delineation and classification. Specifically, the problem of MCG delineation is formulated as multi-class heatmap regression. Meanwhile, a binary diagnostic classification label as well as a duration are jointly estimated for each cardiac event using features that are temporally aligned by event heatmaps. The framework is evaluated on a clinical MCG dataset containing data collected from 270 subjects with cardiac anomalies and 108 control subjects.
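The heatmap-regression formulation of delineation can be illustrated with a minimal sketch; the window length, event positions, and Gaussian width below are hypothetical, not values from the paper:

```python
import numpy as np

def gaussian_heatmaps(length, event_positions, sigma=3.0):
    """Build one heatmap channel per cardiac event type.

    length          -- number of time samples in the window
    event_positions -- dict mapping event name to sample index
    sigma           -- Gaussian spread (in samples) around each event
    """
    t = np.arange(length)
    return {name: np.exp(-((t - pos) ** 2) / (2 * sigma ** 2))
            for name, pos in event_positions.items()}

# Hypothetical positions of the four cardiac events in a 200-sample window.
maps = gaussian_heatmaps(200, {"Q": 60, "R": 70, "S": 80, "T": 140})
peak = int(np.argmax(maps["R"]))  # the regression target peaks at the R-wave
```

A network trained against such targets localizes each event by the argmax of its predicted channel.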
We designed and conducted a two-fold cross-validation study to validate the proposed method and to compare its performance with the SOTA methods. Experimental results demonstrated that our method outperformed counterparts on both event delineation and diagnostic classification tasks, achieving respectively an average ECG-F1 of 0.987 and an average Event-F1 of 0.975 for MCG delineation, and an average accuracy of 0.870, an average sensitivity of 0.732, an average specificity of 0.914 and an average AUC of 0.903 for diagnostic classification. Comprehensive ablation experiments are additionally performed to investigate the effectiveness of different network components.

This article presents a fixed-time (FxT) system identifier for continuous-time nonlinear systems. A novel adaptive update law with discontinuous gradient flows of the identification errors is presented, which leverages concurrent learning (CL) to guarantee the learning of uncertain nonlinear dynamics in a fixed time, as opposed to asymptotic or exponential time. More specifically, the CL approach retrieves a batch of samples stored in a memory, and the update law simultaneously minimizes the identification error for the current stream of samples and the past memory samples. Rigorous analyses are provided based on FxT Lyapunov stability to certify FxT convergence to the stable equilibria of the gradient-descent flow of the system identification error under easy-to-verify rank conditions. The performance of the proposed method in comparison with existing methods is illustrated in the simulation results.

The random features approach has been widely used for kernel approximation in large-scale machine learning. A number of recent studies have explored data-dependent sampling of features, modifying the stochastic oracle from which random features are sampled. While proposed techniques in this realm improve the approximation, their suitability is often verified on a single learning task.
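The concurrent-learning idea, minimizing the identification error over the current sample and a stored memory batch simultaneously, can be sketched for a simple linear-in-parameters model. The regressor, parameters, and step size are illustrative assumptions, and this plain discrete-time gradient sketch does not reproduce the paper's fixed-time update law:

```python
import numpy as np

rng = np.random.default_rng(0)
theta_true = np.array([2.0, -1.0])          # unknown parameters to identify

def phi(x):
    return np.array([x, x ** 2])            # known regressor vector

# Memory of past samples (x, y) with y = theta^T phi(x); the rank condition
# requires the stored regressors to span the parameter space.
memory = [(x, theta_true @ phi(x)) for x in (0.5, -1.0, 2.0)]

theta = np.zeros(2)
lr = 0.01
for step in range(2000):
    x = rng.uniform(-2, 2)
    y = theta_true @ phi(x)
    # Gradient step on the current sample plus all memory samples.
    grad = (theta @ phi(x) - y) * phi(x)
    for xm, ym in memory:
        grad += (theta @ phi(xm) - ym) * phi(xm)
    theta -= lr * grad

err = np.linalg.norm(theta - theta_true)
```

Because the three stored regressors have full rank, the memory term alone drives the parameter error to zero even when the current sample is momentarily uninformative.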
In this article, we propose a task-specific scoring rule for selecting random features, which can be employed for different applications with some adjustments. We restrict our attention to canonical correlation analysis (CCA) and provide a novel, principled guide for finding the score function maximizing the canonical correlations. We prove that this method, called optimal randomized CCA (ORCCA), can outperform (in expectation) the corresponding kernel CCA with a default kernel. Numerical experiments verify that ORCCA is significantly superior to other approximation techniques in the CCA task.

Contextual bandit is a popular sequential decision-making framework for balancing the exploration-exploitation tradeoff in many applications such as recommender systems, search engines, etc. Motivated by two important factors in real-world applications, namely that 1) latent contexts (or features) often exist and 2) feedback often has humans in the loop, leading to human biases, we formulate a generalized contextual bandit framework with latent contexts. Our proposed framework includes a two-layer probabilistic interpretable model for the feedback from humans with latent features. We design a GCL-PS algorithm for the proposed framework, which utilizes posterior sampling to balance the exploration-exploitation tradeoff. We prove a sublinear regret upper bound for GCL-PS, and prove a lower bound for the proposed bandit framework revealing insights on the optimality of GCL-PS. To further improve the computational efficiency of GCL-PS, we propose a Markov chain Monte Carlo (MCMC) algorithm to generate approximate samples, resulting in our GCL-PSMC algorithm. We not only prove a sublinear Bayesian regret upper bound for our GCL-PSMC algorithm, but also reveal insights into the tradeoff between computational efficiency and sequential decision accuracy.
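Posterior sampling, the engine behind GCL-PS, can be illustrated in its simplest form with Bernoulli arms and Beta posteriors; the arm count and reward rates are hypothetical, and this sketch omits the latent-context and human-bias layers of the proposed model:

```python
import numpy as np

rng = np.random.default_rng(1)
true_rates = [0.3, 0.7]          # hypothetical Bernoulli reward rates
alpha = np.ones(2)               # Beta posterior parameters per arm
beta = np.ones(2)

for t in range(2000):
    # Posterior sampling: draw a rate from each arm's posterior and
    # play the arm whose sample is largest.
    samples = rng.beta(alpha, beta)
    arm = int(np.argmax(samples))
    reward = rng.random() < true_rates[arm]
    alpha[arm] += reward
    beta[arm] += 1 - reward

best_arm = int(np.argmax(alpha / (alpha + beta)))
```

Sampling from the posterior, rather than acting on its mean, is what balances exploration against exploitation: an under-explored arm keeps a wide posterior and is therefore still occasionally selected.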
Finally, we apply the proposed framework to hotel recommendations and news article recommendations, and show its superior performance over a variety of baselines via experiments on two public datasets.

Fine-grained visual categorization (FGVC) relies on hierarchical features extracted by deep convolutional neural networks (CNNs) to recognize closely alike objects. In particular, shallow-layer features containing rich spatial details are vital for specifying subtle differences between objects but are usually inadequately optimized due to gradient vanishing during backpropagation. In this article, hierarchical self-distillation (HSD) is introduced to generate well-optimized CNN features for accurate fine-grained categorization. HSD inherits from the widely applied deep supervision and implements multiple intermediate losses for reinforced gradients. Besides that, we observe that the hard (one-hot) labels adopted for intermediate supervision hurt the performance of FGVC by enforcing overstrict supervision. As a solution, HSD seeks self-distillation, where soft predictions generated by deeper layers of the network are hierarchically exploited to supervise the shallow parts. Moreover, a self-information entropy loss (SIELoss) is designed in HSD to adaptively soften intermediate predictions and facilitate better convergence. In addition, a gradient-detached fusion (GDF) module is incorporated to produce an ensemble result with multiscale features via effective feature fusion.
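The soft-supervision step can be sketched as a temperature-softened KL divergence between a deeper branch's predictions (teacher) and a shallow branch's predictions (student). The logits and temperature below are hypothetical, and this fixed-temperature sketch stands in for the adaptive softening that SIELoss performs:

```python
import numpy as np

def softmax(z, T=1.0):
    z = np.asarray(z, dtype=float) / T
    e = np.exp(z - z.max())
    return e / e.sum()

def self_distillation_loss(deep_logits, shallow_logits, T=4.0):
    """KL divergence from the softened deep prediction to the shallow
    branch's prediction; zero when the two branches agree."""
    p = softmax(deep_logits, T)     # soft targets from the deeper layers
    q = softmax(shallow_logits, T)  # shallow-branch predictions
    return float(np.sum(p * np.log(p / q)))

loss_same = self_distillation_loss([2.0, 0.5, -1.0], [2.0, 0.5, -1.0])
loss_diff = self_distillation_loss([2.0, 0.5, -1.0], [-1.0, 0.5, 2.0])
```

Unlike one-hot targets, the softened distribution carries the deeper layers' relative confidence over all classes, which is the signal the shallow parts are trained to match.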
Extensive experiments on four challenging fine-grained datasets show that, with a negligible parameter increase, the proposed HSD framework and the GDF module both bring significant performance gains over different backbones, achieving state-of-the-art classification performance.

Communication and computation resources are normally limited in remote/networked control systems, and thus, saving either of them could substantially contribute to cost reduction, life-span extension, and reliability enhancement for such systems. This article investigates the event-triggered control method to save both communication and computation resources for a class of uncertain nonlinear systems in the presence of actuator failures and full-state constraints. By introducing triggering mechanisms for actuation updating and parameter adaptation, and with the aid of unified constraining functions, a neuroadaptive and fault-tolerant event-triggered control scheme is developed with several salient features: 1) online computation and communication resources are substantially reduced due to the utilization of unsynchronized (uncorrelated) event-triggering paces for control updating and parameter adaptation; 2) systems with and without constraints can be addressed uniformly without involving feasibility conditions on virtual controllers; and 3) the output tracking error converges to a prescribed precision region in the presence of actuation faults and state constraints. Both theoretical analysis and numerical simulation verify the benefits and efficiency of the proposed method.

This letter summarizes and proves the concept of bounded-input bounded-state (BIBS) stability for weight convergence of a broad family of in-parameter-linear nonlinear neural architectures (IPLNAs), as it generally applies to a broad family of incremental gradient learning algorithms.
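The core of any event-triggered scheme, recomputing the control only when the state has drifted far enough from its last sampled value, can be shown on a scalar toy plant. The plant, gain, and threshold here are illustrative assumptions, far simpler than the neuroadaptive fault-tolerant design above:

```python
def event_triggered_run(threshold, steps=400, dt=0.01):
    """Event-triggered state feedback on the scalar unstable plant
    x' = x + u. Between events the control holds its last value
    (zero-order hold); an event fires when |x - x_sampled| >= threshold."""
    x, x_sampled, updates = 1.0, 1.0, 0
    for _ in range(steps):
        if abs(x - x_sampled) >= threshold:
            x_sampled = x          # event: re-sample the state
            updates += 1
        u = -2.0 * x_sampled       # feedback computed from the sampled state
        x += dt * (x + u)          # forward-Euler plant step
    return x, updates

x_final, updates = event_triggered_run(threshold=0.05)
```

The state is regulated to a small neighborhood of the origin while the control is recomputed at only a fraction of the simulation steps, which is exactly the communication/computation saving the triggering mechanism buys.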
A practical BIBS convergence condition, applicable to individual learning points or batches in real-time applications, results from the derived proofs.

Unsupervised domain adaptation (UDA) has attracted increasing attention in recent years; it adapts classifiers to an unlabeled target domain by exploiting a labeled source domain. To reduce the discrepancy between source and target domains, adversarial learning methods are typically selected to seek domain-invariant representations by confusing the domain discriminator. However, classifiers may not be well adapted to such a domain-invariant representation space, as the sample- and class-level data structures could be distorted during adversarial learning. In this article, we propose a novel transferable feature learning approach on graphs (TFLG) for unsupervised adversarial domain adaptation (DA), which jointly incorporates sample- and class-level structure information across two domains. TFLG first constructs graphs for minibatch samples and identifies the classwise correspondence across domains. A novel cross-domain graph convolutional operation is designed to jointly align the sample- and class-level structures in two domains. Moreover, a memory bank is designed to further exploit the class-level information. Extensive experiments on benchmark datasets demonstrate the effectiveness of our approach compared to state-of-the-art UDA methods.

Vision-and-language navigation (VLN) is a challenging task that requires an agent to navigate in real-world environments by understanding natural language instructions and visual information received in real time. Prior works have implemented VLN tasks in continuous environments or on physical robots, all of which use a fixed-camera configuration due to the limitations of datasets, such as 1.5-m height, 90° horizontal field of view (HFOV), and so on.
However, real-life robots with different purposes have multiple camera configurations, and the huge gap in visual information makes it difficult to directly transfer the learned navigation skills between various robots. In this brief, we propose a visual perception generalization strategy based on meta-learning, which enables the agent to adapt quickly to a new camera configuration. In the training phase, we first localize the generalization problem to the visual perception module and then compare two meta-learning algorithms for better generalization in seen and unseen environments. One of them uses the model-agnostic meta-learning (MAML) algorithm, which requires few-shot adaptation, and the other refers to a metric-based meta-learning method with a feature-wise affine transformation (AT) layer. The experimental results on the VLN-CE dataset demonstrate that our strategy successfully adapts the learned navigation skills to new camera configurations, and the two algorithms show their advantages in seen and unseen environments, respectively.

G protein-coupled receptors (GPCRs) account for about 40% to 50% of drug targets, and many human diseases are related to them. Accurate prediction of GPCR interaction is not only essential to understand its structural role, but also helps design more effective drugs. At present, the prediction of GPCR interaction mainly uses machine learning methods, which generally require a large number of independent and identically distributed samples to achieve good results. However, the number of available labeled GPCR samples is scarce. Transfer learning has a strong advantage in dealing with such small-sample problems. Therefore, this paper proposes a transfer learning method based on sample similarity, using XGBoost as the weak classifier and the TrAdaBoost algorithm with JS-divergence-based data weight initialization to transfer samples and construct a dataset.
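The Jensen-Shannon divergence used for the weight initialization can be computed directly; the two distributions below are hypothetical stand-ins for source- and target-sample feature distributions:

```python
import numpy as np

def js_divergence(p, q):
    """Jensen-Shannon divergence (log base 2) between two discrete
    distributions: 0 for identical distributions, 1 for disjoint ones."""
    p = np.asarray(p, dtype=float)
    q = np.asarray(q, dtype=float)
    m = 0.5 * (p + q)

    def kl(a, b):
        mask = a > 0               # skip zero-probability terms
        return float(np.sum(a[mask] * np.log2(a[mask] / b[mask])))

    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

d_same = js_divergence([0.5, 0.5, 0.0], [0.5, 0.5, 0.0])
d_disjoint = js_divergence([1.0, 0.0], [0.0, 1.0])
```

Unlike the KL divergence, JS is symmetric and bounded, which makes it a convenient similarity score for weighting transferred samples.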
After that, a deep neural network based on the attention mechanism is used for model training, and existing GPCRs are used for prediction. In short-distance contact prediction, the accuracy of our method is 0.26 higher than that of similar methods.

Sequence alignment is an essential step in computational genomics. More accurate and efficient sequence pre-alignment methods, which run before conducting expensive computation for final verification, are still urgently needed. In this article, we propose a more accurate and efficient pre-alignment algorithm for sequence alignment, called DiagAF. Firstly, DiagAF uses a new lower bound on edit distance based on shifted Hamming masks. The new lower bound makes use of fewer shifted Hamming masks compared with state-of-the-art algorithms such as SHD and MAGNET. Moreover, it takes into account the information of edit-distance path exchange on shifted Hamming masks. Secondly, DiagAF can deal with alignments of sequence pairs of unequal length, whereas state-of-the-art methods handle only equal lengths. Thirdly, DiagAF can align sequences with early termination for true alignments. In our experiments, we compared DiagAF with state-of-the-art methods; DiagAF achieves a much smaller error rate than they do while using less time. We believe that the DiagAF algorithm can further improve the performance of state-of-the-art sequence alignment software. The source code of DiagAF can be downloaded from https://github.com/BioLab-cz/DiagAF.

Data visualizations have been increasingly used in oral presentations to communicate data patterns to the general public. Clear verbal introductions of visualizations that explain how to interpret the visually encoded information are essential to convey the takeaways and avoid misunderstandings. We contribute a series of studies to investigate how to effectively introduce visualizations to audiences with varying degrees of visualization literacy. We begin with understanding how people are introducing visualizations.
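The shifted-Hamming-mask idea behind SHD-style pre-alignment filters can be sketched as an AND of mismatch masks over small shifts: a position survives only if it mismatches under every shift, so few survivors suggest the pair may align within the edit budget. This is a simplified illustration of the general idea, not DiagAF's actual lower bound, and the sequences are made up:

```python
def shifted_and_mask(ref, query, max_shift):
    """AND of XOR (mismatch) masks over shifts -max_shift..max_shift.

    mask[i] == 1 means ref[i] fails to match the query under every
    tested shift, i.e. the position cannot be explained without an edit.
    """
    n = len(ref)
    mask = [1] * n
    for s in range(-max_shift, max_shift + 1):
        for i in range(n):
            j = i + s
            if 0 <= j < len(query) and ref[i] == query[j]:
                mask[i] = 0        # matched under some shift
    return mask

# A single substitution leaves exactly one surviving mismatch position.
m = shifted_and_mask("ACGTACGT", "ACGAACGT", max_shift=1)
mismatch_count = sum(m)
```

A pre-alignment filter rejects the pair outright when the surviving mismatch count exceeds the allowed edit budget, avoiding the expensive exact verification.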
We crowdsource 110 introductions of visualizations and categorize them based on their content and structure. From these crowdsourced introductions, we identify different introduction strategies and generate a set of introductions for evaluation. We conduct experiments to systematically compare the effectiveness of different introduction strategies across four visualizations with 1,080 participants. We find that introductions explaining visual encodings with concrete examples are the most effective. Our study provides both qualitative and quantitative insights into how to construct effective verbal introductions of visualizations in presentations, inspiring further research in data storytelling.

We present a novel approach for volume exploration that is versatile yet effective in isolating semantic structures in both noisy and clean data. Specifically, we describe a hierarchical active contours approach based on Bhattacharyya gradient flow, which is easier to control, robust to noise, and able to incorporate various types of statistical information to drive an edge-agnostic exploration process. To facilitate a time-bound, user-driven volume exploration process that is applicable to a wide variety of data sources, we present an efficient multi-GPU implementation that (1) is approximately 400 times faster than a single-threaded CPU implementation, (2) allows hierarchical exploration of 2D and 3D images, (3) supports customization through multidimensional attribute spaces, and (4) is applicable to a variety of data sources and semantic structures. The exploration system follows a two-step process: it first applies active contours to isolate semantically meaningful subsets of the volume, and then applies transfer functions to the isolated regions locally to produce clear and clutter-free visualizations.
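The statistical overlap that drives the Bhattacharyya gradient flow can be sketched as the Bhattacharyya coefficient between two discrete intensity histograms; the histograms below are hypothetical, and the actual method evolves contours along gradients of this quantity rather than merely evaluating it:

```python
import numpy as np

def bhattacharyya_coefficient(p, q):
    """Overlap between two discrete intensity distributions:
    1.0 for identical histograms, 0.0 for disjoint ones."""
    p = np.asarray(p, dtype=float)
    q = np.asarray(q, dtype=float)
    p = p / p.sum()                # normalize to probability distributions
    q = q / q.sum()
    return float(np.sum(np.sqrt(p * q)))

# Hypothetical inside-contour vs. outside-contour intensity histograms.
bc_same = bhattacharyya_coefficient([1, 4, 2], [1, 4, 2])
bc_disjoint = bhattacharyya_coefficient([1, 1, 0, 0], [0, 0, 1, 1])
```

Minimizing this overlap between the inside and outside of a contour separates regions by their intensity statistics rather than by edges, which is what makes the flow edge-agnostic.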
We show the effectiveness of our approach in isolating and visualizing structures of interest, without needing any specialized segmentation methods, on a variety of data sources, including 3D optical microscopy, multi-channel optical volumes, abdominal and chest CT, micro-CT, MRI, simulation, and synthetic data. We also gathered feedback from a medical trainee regarding the usefulness of our approach and discussed potential applications in clinical workflows.

Fine-grained visual recognition aims to classify objects with visually similar appearances into subcategories, and it has made great progress with the development of deep CNNs. However, handling subtle differences between subcategories still remains a challenge. In this paper, we propose to solve this issue in one unified framework from two aspects, i.e., constructing feature-level interrelationships and capturing part-level discriminative features. This framework, namely PArt-guided Relational Transformers (PART), learns discriminative part features with an automatic part discovery module, and explores intrinsic correlations with a feature transformation module by adapting Transformer models from the field of natural language processing. The part discovery module efficiently discovers discriminative regions that correspond closely to the gradient-descent procedure. The feature transformation module then builds correlations within the global embedding and multiple part embeddings, enhancing spatial interactions among semantic pixels. Moreover, our proposed approach does not rely on additional part branches at inference time and reaches state-of-the-art performance on three widely used fine-grained object recognition benchmarks.
Experimental results and explainable visualizations demonstrate the effectiveness of our proposed approach.

Decoupling the sibling head has recently shown great potential in relieving the inherent task-misalignment problem in two-stage object detectors. However, existing works design similar structures for classification and regression, ignoring task-specific characteristics and feature demands. Besides, the shared knowledge that may benefit the two branches is neglected, leading to potential excessive decoupling and semantic inconsistency. To address these two issues, we propose a Heterogeneous Task Decoupling (HTD) framework for object detection, which utilizes a Progressive Graph (PGraph) module and a Border-aware Adaptation (BA) module for task decoupling. Specifically, we first devise a Semantic Feature Aggregation (SFA) module to aggregate global semantics with image-level supervision, serving as the shared knowledge for the task-decoupled framework. Then, the PGraph module performs progressive graph reasoning, including local spatial aggregation and global semantic interaction, to enhance the semantic representations of region proposals for classification. The proposed BA module integrates multi-level features adaptively, focusing on the low-level border activation to obtain representations with spatial and border perception for regression. Finally, we utilize the aggregated knowledge from SFA to keep the instance-level semantic consistency (ISC) of the decoupled framework. Extensive experiments demonstrate that HTD outperforms existing detection works by a large margin, and achieves a single-model 50.4% AP and 33.2% APs on the COCO test-dev set using a ResNet-101-DCN backbone, the best entry among state-of-the-art methods under the same configuration.
Our code is available at https://github.com/CityU-AIM-Group/HTD.

Many state-of-the-art stereo matching algorithms based on deep learning have been proposed in recent years; they usually construct a cost volume and adopt cost filtering via a series of 3D convolutions. In essence, the possibility of every disparity is exhaustively represented in the cost volume, and the estimated disparity holds the maximal possibility. The cost filtering can learn contextual information and reduce mismatches in ill-posed regions. However, this kind of method has two main disadvantages: 1) cost filtering is very time-consuming, so it is difficult to simultaneously satisfy the requirements for both speed and accuracy; and 2) the thickness of the cost volume determines the disparity range that can be estimated, and the pre-defined disparity range may not meet the demands of practical applications. This paper proposes a novel real-time stereo matching method called RLStereo, which is based on reinforcement learning and abandons the cost volume and the routine of exhaustive search. The trained RLStereo takes only a few actions iteratively to search for the disparity value for each pair of stereo images. Experimental results show the effectiveness of the proposed method, which achieves performance comparable to state-of-the-art algorithms with real-time speed on the public large-scale test set Scene Flow.

Knowledge of aneurysm geometry and local mechanical wall parameters obtained using ultrasound (US) can contribute to a better prediction of rupture risk in abdominal aortic aneurysms (AAAs). However, aortic strain imaging using conventional US is limited by the lateral lumen-wall contrast and resolution. In this study, ultrafast multiperspective bistatic (MP BS) imaging is used to improve aortic US, in which two curved-array transducers receive simultaneously on each transmit event.
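The exhaustive cost-volume search that RLStereo avoids can be illustrated on 1-D signals; the signals and the matching cost (absolute difference) are toy assumptions:

```python
import numpy as np

def disparity_cost_volume(left, right, max_disp):
    """Exhaustive cost volume for 1-D signals: cost[d, x] measures how
    well left[x] matches right[x - d]; the estimated disparity at each x
    is the d with minimal cost."""
    n = len(left)
    cost = np.full((max_disp + 1, n), np.inf)
    for d in range(max_disp + 1):
        for x in range(d, n):
            cost[d, x] = abs(left[x] - right[x - d])
    return cost

left = np.array([0, 1, 5, 2, 7, 3, 9, 4], dtype=float)
right = np.roll(left, -2)          # right view shifted by a disparity of 2
cost = disparity_cost_volume(left, right, max_disp=3)
est = int(np.argmin(cost[:, 4]))   # estimated disparity at position x = 4
```

Every candidate disparity is evaluated and stored, which is exactly why cost-volume thickness fixes the searchable range and why filtering it is expensive; a learned search policy sidesteps both costs.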
The advantage of such bistatic US imaging for both image quality and strain estimation was investigated by comparing it to single-perspective monostatic (SP MS) and MP monostatic (MP MS) imaging, i.e., alternately transmitting and receiving with either transducer. Strain imaging was performed in US simulations and in an experimental study on porcine aortas. Different compounding strategies were tested to retrieve the most useful information from each received US signal. Finally, apart from the conventional sector grid in curved-array US imaging, a polar grid with respect to the vessel’s local coordinate system is introduced. This new reconstruction method demonstrated improved displacement estimation in aortic US. The US simulations showed increased strain estimation accuracy using MP BS imaging compared to MP MS imaging, with a decrease in the average relative error of between 41% and 84% in vessel-wall regions between the transducers. In the experimental results, the mean image contrast-to-noise ratio was improved by up to 8 dB in the vessel-wall regions between the transducers. This resulted in an increase in the mean elastographic signal-to-noise ratio of about 15 dB in radial strain and 6 dB in circumferential strain.
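The contrast-to-noise figure of merit quoted above can be computed as follows; this sketch uses one common definition (mean difference over pooled standard deviation, expressed in dB), and the pixel values are made up:

```python
import numpy as np

def contrast_to_noise_db(region_a, region_b):
    """Contrast-to-noise ratio (in dB) between two image regions,
    using |mean difference| / pooled standard deviation."""
    a = np.asarray(region_a, dtype=float)
    b = np.asarray(region_b, dtype=float)
    cnr = abs(a.mean() - b.mean()) / np.sqrt(a.var() + b.var())
    return 20 * np.log10(cnr)

# Hypothetical pixel intensities: vessel wall vs. lumen.
cnr_db = contrast_to_noise_db([10, 11, 9, 10], [2, 3, 1, 2])
```

A higher CNR means the wall is easier to distinguish from the lumen, which in turn improves the displacement tracking that strain estimation depends on.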
