About the Journal
Journal of Data-Driven Engineering Systems
ISSN: 1544-788X
E-ISSN: 2225-7860
Journal of Data-Driven Engineering Systems (DDES) is an international, double-blind peer-reviewed, open-access journal on the science and technology of engineering, published quarterly.
DDES is a research journal publishing original full-length research papers, reviews, and Letters to the Editor. The Journal is devoted to advancing and disseminating knowledge concerning data-driven system modeling, intelligent control systems, structural health monitoring, and optimization methods.
The Journal publishes original research findings as regular papers and review papers (by invitation). It provides a platform for engineers, researchers, academics, and practitioners who are motivated to contribute to the engineering disciplines and applied sciences. It also welcomes contributions that propose solutions to the specific challenges of the developing world and address science and technology issues from a multidisciplinary perspective.
Subject areas suitable for publication are as follows:
Active Learning:
- Query strategy: Defines how to select the most informative instances for labeling in an active learning setting.
- Pool-based active learning: Involves selecting instances from an unlabeled pool of data for labeling.
- Stream-based active learning: Deals with scenarios where data arrives in a sequential stream, and the labeling decision needs to be made on-the-fly.
- Uncertainty sampling: A common query strategy that selects instances with high uncertainty for labeling (see the sketch after this list).
- Committee-based methods: Involves maintaining multiple models or annotators and selecting instances based on disagreement among them.
- Bayesian active learning: Incorporates prior knowledge and uncertainty estimation using Bayesian inference to guide the selection of instances for labeling.
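As an illustration of pool-based uncertainty sampling, the following is a minimal Python/NumPy sketch; the entropy scoring, the toy probability matrix, and the n_queries parameter are illustrative assumptions rather than a prescribed implementation.

    import numpy as np

    def entropy_query(probabilities: np.ndarray, n_queries: int) -> np.ndarray:
        """Pool-based uncertainty sampling: return the indices of the most uncertain
        unlabeled instances, scored by the entropy of their predicted class probabilities."""
        eps = 1e-12                                    # avoid log(0)
        entropy = -np.sum(probabilities * np.log(probabilities + eps), axis=1)
        return np.argsort(entropy)[-n_queries:][::-1]  # highest entropy first

    # Toy pool of 4 unlabeled instances with 3-class predictions from some model.
    probs = np.array([[0.98, 0.01, 0.01],
                      [0.40, 0.35, 0.25],
                      [0.70, 0.20, 0.10],
                      [0.34, 0.33, 0.33]])
    print(entropy_query(probs, n_queries=2))           # [3 1]: the two most uncertain instances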
Anomaly Detection:
- Unsupervised anomaly detection: Identifies patterns that do not conform to expected behavior based on unlabeled data.
- Supervised anomaly detection: Trains a model on labeled data with both normal and anomalous instances to classify new instances.
- Semi-supervised anomaly detection: Utilizes a combination of labeled and unlabeled data to identify anomalies.
- Statistical methods: Use statistical models and techniques to detect anomalies based on deviations from expected distributions (a small example follows this list).
- Machine learning approaches: Utilize algorithms such as clustering, classification, or density estimation to detect anomalies.
- Deep learning-based anomaly detection: Leverages deep neural networks to learn complex patterns and identify anomalies in high-dimensional data.
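A minimal sketch of the statistical approach, flagging observations whose z-score exceeds a threshold; the threshold of 3 standard deviations and the injected outliers are illustrative assumptions.

    import numpy as np

    def zscore_anomalies(x: np.ndarray, threshold: float = 3.0) -> np.ndarray:
        """Unsupervised statistical anomaly detection: flag values that deviate from
        the sample mean by more than `threshold` standard deviations."""
        z = (x - x.mean()) / x.std()
        return np.where(np.abs(z) > threshold)[0]

    data = np.concatenate([np.random.normal(0.0, 1.0, 500), [8.5, -9.2]])  # two injected outliers
    print(zscore_anomalies(data))  # indices of the flagged anomalies (typically 500 and 501)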
Ant Colony Optimization:
- Inspired by the foraging behavior of ants to solve optimization problems.
- Uses a population of artificial ants to explore the solution space and build solutions incrementally.
- Utilizes pheromone trails to communicate and share information about good solutions among ants.
- Commonly applied to combinatorial optimization problems, such as the traveling salesman problem or the vehicle routing problem.
- Iteratively improves solutions by applying local heuristics and exploiting the accumulated knowledge in the pheromone trails (a minimal sketch follows below).
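The sketch below illustrates the construct-and-deposit loop of ant colony optimization on a toy five-city travelling salesman instance; the distance matrix and the parameter values (alpha, beta, rho, colony size, iteration count) are illustrative assumptions.

    import numpy as np

    rng = np.random.default_rng(0)

    # Symmetric distance matrix for a small 5-city travelling salesman instance.
    dist = np.array([[0, 2, 9, 10, 7],
                     [2, 0, 6, 4, 3],
                     [9, 6, 0, 8, 5],
                     [10, 4, 8, 0, 6],
                     [7, 3, 5, 6, 0]], dtype=float)
    n = len(dist)
    pheromone = np.ones((n, n))                     # shared pheromone trails
    alpha, beta, rho, n_ants, n_iters = 1.0, 2.0, 0.5, 10, 50

    def tour_length(tour):
        return sum(dist[tour[i], tour[(i + 1) % n]] for i in range(n))

    best_tour, best_len = None, np.inf
    for _ in range(n_iters):
        tours = []
        for _ in range(n_ants):
            tour = [rng.integers(n)]                # each ant builds a tour incrementally
            while len(tour) < n:
                i = tour[-1]
                choices = [j for j in range(n) if j not in tour]
                # Attractiveness combines pheromone strength and a local heuristic (1/distance).
                weights = np.array([pheromone[i, j] ** alpha * (1.0 / dist[i, j]) ** beta
                                    for j in choices])
                tour.append(rng.choice(choices, p=weights / weights.sum()))
            tours.append(tour)
        pheromone *= (1.0 - rho)                    # evaporation
        for tour in tours:                          # shorter tours deposit more pheromone
            length = tour_length(tour)
            if length < best_len:
                best_tour, best_len = tour, length
            for i in range(n):
                a, b = tour[i], tour[(i + 1) % n]
                pheromone[a, b] += 1.0 / length
                pheromone[b, a] += 1.0 / length

    print([int(city) for city in best_tour], best_len)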
Attention Mechanisms:
- Originally introduced in the context of natural language processing and neural machine translation.
- Allow neural networks to focus on relevant parts of the input or prioritize different input elements.
- Mechanisms such as self-attention or transformer architectures enable modeling relationships between different elements of the input (illustrated in the sketch after this list).
- Enhance the performance of sequence-to-sequence tasks, language understanding, image captioning, and other tasks where capturing dependencies is crucial.
- Enable the network to assign different weights to different parts of the input, effectively learning where to focus its attention.
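A minimal NumPy sketch of single-head scaled dot-product self-attention; the sequence length, model dimension, and random projection matrices Wq, Wk, and Wv are illustrative assumptions.

    import numpy as np

    def self_attention(X, Wq, Wk, Wv):
        """Scaled dot-product self-attention over a sequence X of shape (seq_len, d_model).
        Each output position is a weighted mix of all value vectors, with weights derived
        from query-key similarity, i.e. the model learns where to focus."""
        Q, K, V = X @ Wq, X @ Wk, X @ Wv
        scores = Q @ K.T / np.sqrt(K.shape[-1])            # pairwise similarities
        weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
        weights /= weights.sum(axis=-1, keepdims=True)     # softmax over the sequence
        return weights @ V

    rng = np.random.default_rng(0)
    seq_len, d_model = 4, 8
    X = rng.normal(size=(seq_len, d_model))
    Wq, Wk, Wv = (rng.normal(size=(d_model, d_model)) for _ in range(3))
    print(self_attention(X, Wq, Wk, Wv).shape)             # (4, 8): one attended vector per position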
Autoencoders:
- Neural network models designed to learn efficient representations of input data by compressing and then reconstructing it (a minimal sketch follows this list).
- Consist of an encoder and a decoder network, with a bottleneck layer that represents the compressed latent space.
- Used for tasks such as dimensionality reduction, data denoising, and anomaly detection.
- Variational Autoencoders (VAEs) introduce probabilistic modeling in autoencoders, allowing for generative capabilities.
- Sparse autoencoders incorporate sparsity constraints to encourage the model to learn more meaningful representations.
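A minimal PyTorch sketch of a fully connected autoencoder with a three-dimensional bottleneck trained on reconstruction error; the layer sizes, optimizer settings, and synthetic data are illustrative assumptions.

    import torch
    import torch.nn as nn

    # Encoder compresses 20-dimensional inputs to a 3-dimensional latent code; the decoder reconstructs.
    encoder = nn.Sequential(nn.Linear(20, 10), nn.ReLU(), nn.Linear(10, 3))
    decoder = nn.Sequential(nn.Linear(3, 10), nn.ReLU(), nn.Linear(10, 20))
    model = nn.Sequential(encoder, decoder)

    optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
    loss_fn = nn.MSELoss()

    x = torch.randn(256, 20)                  # toy unlabeled data
    for epoch in range(200):
        loss = loss_fn(model(x), x)           # reconstruction error drives learning
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()

    codes = encoder(x)                        # compressed latent representation (dimensionality reduction)
    print(codes.shape, loss.item())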
Bayesian Learning:
- Uses Bayesian inference to update beliefs and make predictions based on prior knowledge and observed data (a small worked example follows this list).
- Provides a framework for reasoning under uncertainty by modeling uncertainty as probabilities.
- Bayesian networks: Graphical models that represent probabilistic relationships among variables.
- Bayesian optimization: Sequential model-based optimization technique that uses Bayesian inference to find the optimal configuration of a costly black-box function.
- Bayesian neural networks: Neural networks that incorporate Bayesian inference to estimate uncertainties in predictions.
- Bayesian learning allows for principled decision-making, robustness to noise, and the ability to incorporate prior knowledge into the learning process.
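A small worked example of Bayesian updating with a conjugate Beta-Bernoulli model; the prior parameters and the observation sequence are illustrative assumptions.

    # Bayesian updating of a Bernoulli parameter (e.g. a component's failure probability)
    # with a conjugate Beta prior: the posterior simply adds observed counts to the prior.
    alpha_prior, beta_prior = 2.0, 2.0           # weak prior belief centered on 0.5
    observations = [0, 0, 1, 0, 0, 0, 1, 0]      # observed data: 1 = failure, 0 = no failure

    alpha_post = alpha_prior + sum(observations)
    beta_post = beta_prior + len(observations) - sum(observations)

    posterior_mean = alpha_post / (alpha_post + beta_post)
    print("posterior failure probability:", posterior_mean)   # 4 / 12 = 0.333..., pulled from the prior toward the data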
Blockchain for Data Security:
- Utilizes distributed ledger technology to provide secure and transparent storage and management of data.
- Each data transaction is recorded in a block, which is linked to previous blocks using cryptographic hashes, ensuring immutability (a minimal hash-chain sketch follows this list).
- Decentralized consensus mechanisms, such as proof-of-work or proof-of-stake, ensure trust and prevent unauthorized modifications.
- Enables secure data sharing among multiple parties without relying on a central authority.
- Applications include secure storage of medical records, supply chain management, and protection of intellectual property.
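A minimal hash-chain sketch showing how linking blocks by cryptographic hashes makes tampering detectable; the block fields and example records are illustrative assumptions, and a real blockchain adds distributed consensus on top of this idea.

    import hashlib
    import json
    import time

    def make_block(data, previous_hash):
        """Create a block whose hash covers its payload and the previous block's hash,
        so any later modification breaks the chain."""
        block = {"timestamp": time.time(), "data": data, "previous_hash": previous_hash}
        block["hash"] = hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()
        return block

    def chain_is_valid(chain):
        for prev, curr in zip(chain, chain[1:]):
            body = {k: v for k, v in curr.items() if k != "hash"}
            recomputed = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
            if curr["previous_hash"] != prev["hash"] or curr["hash"] != recomputed:
                return False
        return True

    chain = [make_block({"record": "genesis"}, previous_hash="0")]
    chain.append(make_block({"record": "sensor reading 42.1"}, chain[-1]["hash"]))
    print(chain_is_valid(chain))               # True
    chain[1]["data"]["record"] = "tampered"
    print(chain_is_valid(chain))               # False: the stored hash no longer matches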
Cloud Computing:
- Delivers on-demand access to computing resources (e.g., servers, storage, databases) over the internet.
- Provides scalability, flexibility, and cost-efficiency by allowing users to pay for resources on a usage basis.
- Infrastructure as a Service (IaaS): Offers virtualized computing resources, such as virtual machines and storage.
- Platform as a Service (PaaS): Provides a platform for developing, deploying, and managing applications without the complexity of infrastructure management.
- Software as a Service (SaaS): Delivers software applications over the internet, accessible through web browsers or APIs.
Computer Vision:
- Concerned with enabling computers to understand and interpret visual information from images or videos.
- Involves tasks such as object detection, image classification, image segmentation, and facial recognition.
- Deep learning techniques, such as convolutional neural networks (CNNs), have achieved significant advancements in computer vision.
- Applications include autonomous vehicles, surveillance systems, medical imaging, augmented reality, and image-based search.
Constraint Satisfaction Problems (CSPs):
- Involves finding solutions that satisfy a set of constraints defined over a set of variables.
- Each variable has a domain of possible values, and constraints restrict the combinations of values for the variables.
- Backtracking algorithms, such as depth-first search, are commonly used to solve CSPs (a short sketch follows this list).
- Examples of CSPs include the map coloring problem, Sudoku, scheduling problems, and the Eight Queens problem.
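A short backtracking sketch for the classic map-coloring CSP over the regions of Australia with three colors; the variable ordering and the problem instance are illustrative choices.

    # Backtracking (depth-first) search: variables are regions, domains are colors,
    # and constraints forbid neighboring regions from sharing a color.
    neighbors = {
        "WA": ["NT", "SA"], "NT": ["WA", "SA", "Q"], "SA": ["WA", "NT", "Q", "NSW", "V"],
        "Q": ["NT", "SA", "NSW"], "NSW": ["Q", "SA", "V"], "V": ["SA", "NSW"], "T": [],
    }
    colors = ["red", "green", "blue"]

    def consistent(region, color, assignment):
        return all(assignment.get(n) != color for n in neighbors[region])

    def backtrack(assignment):
        if len(assignment) == len(neighbors):              # every variable assigned
            return assignment
        region = next(r for r in neighbors if r not in assignment)
        for color in colors:                               # try each value in the domain
            if consistent(region, color, assignment):
                result = backtrack({**assignment, region: color})
                if result is not None:
                    return result
        return None                                        # dead end: undo and backtrack

    print(backtrack({}))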
Convex Optimization:
- Focuses on optimization problems with convex objective functions and convex constraints.
- In convex optimization problems, every local optimum is also a global optimum, and the optimum is unique when the objective is strictly convex.
- Efficient algorithms, such as interior-point methods and gradient-based methods, can solve convex optimization problems (a minimal gradient-descent sketch follows this list).
- Widely applicable in various fields, including machine learning, signal processing, finance, and engineering design.
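A minimal sketch of gradient descent on a convex quadratic, for which the only stationary point is the global minimizer; the matrix A, vector b, step size, and iteration count are illustrative assumptions.

    import numpy as np

    # Minimize f(x) = 0.5 * x^T A x - b^T x with A symmetric positive definite.
    A = np.array([[3.0, 1.0], [1.0, 2.0]])
    b = np.array([1.0, -1.0])

    x = np.zeros(2)
    step = 0.1                                   # fixed step size (illustrative choice)
    for _ in range(500):
        gradient = A @ x - b                     # gradient of the quadratic objective
        x = x - step * gradient

    print(x, np.linalg.solve(A, b))              # the iterate converges to the exact minimizer A^{-1} b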
Data Privacy and Ethics:
- Concerned with protecting the privacy of individuals' data and addressing ethical considerations in data-driven technologies.
- Involves techniques such as data anonymization, differential privacy, and secure multiparty computation.
- Compliance with data protection regulations (e.g., GDPR, CCPA) and ethical guidelines is crucial.
- Balancing data utility and privacy is a key challenge, especially in scenarios where personal data is used for research or decision-making.
Deep Learning:
- Subset of machine learning that focuses on artificial neural networks with multiple layers.
- Learns hierarchical representations of data through multiple layers of abstraction.
- Deep neural networks can automatically learn features from raw data, reducing the need for manual feature engineering.
- Common architectures include convolutional neural networks (CNNs) for image analysis and recurrent neural networks (RNNs) for sequential data.
- Applications span various domains, including image and speech recognition, natural language processing, and autonomous systems.
Deep Reinforcement Learning:
- Combination of deep learning and reinforcement learning, where an agent learns to make sequential decisions in an environment.
- Utilizes neural networks to approximate the value function or policy of the agent.
- Reinforcement learning algorithms, such as Q-learning or policy gradients, are extended with deep neural networks as function approximators (the underlying tabular Q-learning update is sketched after this list).
- Deep RL has achieved breakthroughs in complex tasks, such as playing games (e.g., AlphaGo, OpenAI Five) and robotics control.
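The sketch below shows the tabular Q-learning update that deep RL methods approximate with a neural network; the toy chain environment and the learning, discount, and exploration rates are illustrative assumptions.

    import numpy as np

    rng = np.random.default_rng(0)

    # Toy environment: a chain of 5 states; action 0 moves left, action 1 moves right,
    # and reaching the rightmost state gives a reward of +1 and ends the episode.
    n_states, n_actions = 5, 2
    Q = np.zeros((n_states, n_actions))          # deep RL replaces this table with a neural network
    alpha, gamma, epsilon = 0.1, 0.95, 0.1

    for episode in range(2000):
        s = int(rng.integers(n_states - 1))      # start from a random non-terminal state
        while True:
            # Epsilon-greedy action selection.
            a = int(rng.integers(n_actions)) if rng.random() < epsilon else int(np.argmax(Q[s]))
            s_next = min(s + 1, n_states - 1) if a == 1 else max(s - 1, 0)
            done = s_next == n_states - 1
            r = 1.0 if done else 0.0
            # Q-learning update: move Q(s, a) toward the bootstrapped target.
            Q[s, a] += alpha * (r + gamma * (0.0 if done else Q[s_next].max()) - Q[s, a])
            s = s_next
            if done:
                break

    print(np.argmax(Q, axis=1))                  # learned greedy policy: move right from every non-terminal state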
Edge Computing:
- Computing paradigm that brings computation and data storage closer to the edge of the network, near the data source.
- Reduces latency, conserves bandwidth, and enhances privacy and security by processing data locally.
- Enables real-time processing and decision-making in applications like IoT, autonomous vehicles, and augmented reality.
- Combines edge devices, edge servers, and cloud services to create a distributed computing infrastructure.
Evolutionary Algorithms:
- Optimization techniques inspired by natural evolution and genetics.
- Population-based algorithms that iteratively evolve a population of candidate solutions.
- Apply genetic operators, such as selection, crossover, and mutation, to generate new candidate solutions (sketched after this list).
- Evolutionary algorithms explore the solution space efficiently and are suitable for complex optimization problems without known gradients.
- Examples include genetic algorithms, genetic programming, and evolution strategies.
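A minimal genetic-algorithm sketch using tournament selection, uniform crossover, and Gaussian mutation to minimize the sphere function; the population size, mutation settings, and other parameters are illustrative assumptions.

    import numpy as np

    rng = np.random.default_rng(0)

    def fitness(x):
        return -np.sum(x ** 2)                   # maximizing fitness = minimizing the sphere function

    pop_size, dim, generations = 40, 5, 100
    population = rng.uniform(-5, 5, size=(pop_size, dim))

    for _ in range(generations):
        scores = np.array([fitness(ind) for ind in population])
        new_population = []
        for _ in range(pop_size):
            # Tournament selection: twice, keep the fitter of two random individuals.
            parents = []
            for _ in range(2):
                i, j = rng.integers(pop_size, size=2)
                parents.append(population[i] if scores[i] > scores[j] else population[j])
            # Uniform crossover followed by sparse Gaussian mutation.
            mask = rng.random(dim) < 0.5
            child = np.where(mask, parents[0], parents[1])
            child = child + rng.normal(0.0, 0.1, size=dim) * (rng.random(dim) < 0.2)
            new_population.append(child)
        population = np.array(new_population)

    best = max(population, key=fitness)
    print(best, fitness(best))                   # the population drifts toward the optimum at the origin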
Explainable AI:
- Focuses on developing AI models and systems that can provide understandable explanations for their decisions and behaviors.
- Aims to address the "black box" nature of complex machine learning models.
- Interpretable models, rule-based systems, and visualization techniques are used to provide insights into the decision-making process.
- Explainable AI is crucial for domains where transparency, accountability, and regulatory compliance are essential, such as healthcare and finance.
Federated Learning:
- A distributed learning approach that trains models on decentralized data sources without centralizing the data.
- Enables privacy-preserving machine learning by keeping data locally on devices or servers and only sharing model updates.
- Models are trained collaboratively by aggregating updates from multiple devices or clients (a weighted-averaging sketch follows this list).
- Suitable for scenarios with data privacy concerns, such as mobile devices, edge computing, and IoT applications.
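A sketch of federated averaging, the weighted aggregation step used by many federated learning schemes; the client weight vectors and dataset sizes are illustrative assumptions, and a complete system would also handle repeated local training rounds and secure communication.

    import numpy as np

    def federated_average(client_weights, client_sizes):
        """Aggregate locally trained model weights, weighting each client by its number
        of local examples; the raw data itself never leaves the clients."""
        total = sum(client_sizes)
        return sum(w * (n / total) for w, n in zip(client_weights, client_sizes))

    # Toy example: three clients hold locally updated parameter vectors of a shared linear model.
    client_weights = [np.array([0.9, 1.8]), np.array([1.1, 2.2]), np.array([1.0, 2.0])]
    client_sizes = [100, 300, 600]               # number of local training examples per client

    global_weights = federated_average(client_weights, client_sizes)
    print(global_weights)                        # the updated global model broadcast back to clients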
Fraud Detection:
- Involves identifying and preventing fraudulent activities or transactions.
- Uses machine learning and data mining techniques to analyze patterns, anomalies, and behavior in order to detect fraud.
- Common methods include anomaly detection, supervised classification, network analysis, and rule-based systems.
- Fraud detection is applicable to areas such as finance, insurance, e-commerce, and cybersecurity.
Generative Adversarial Networks (GANs):
- Deep learning framework consisting of two competing neural networks: a generator and a discriminator.
- The generator aims to generate synthetic data that resembles real data, while the discriminator tries to distinguish between real and fake data.
- Training GANs involves an adversarial process in which the two networks learn from each other through backpropagation (a minimal training-loop sketch follows this list).
- GANs have been successful in generating realistic images, videos, text, and other types of data.
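A minimal PyTorch training-loop sketch of a GAN fitting a one-dimensional Gaussian; the network sizes, learning rates, and target distribution are illustrative assumptions.

    import torch
    import torch.nn as nn

    torch.manual_seed(0)

    # Generator maps 8-dimensional noise to a scalar sample; discriminator scores samples as real or fake.
    G = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))
    D = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1), nn.Sigmoid())

    opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
    opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
    bce = nn.BCELoss()

    for step in range(2000):
        real = torch.randn(64, 1) * 0.5 + 3.0    # "real" data drawn from N(3, 0.5)
        fake = G(torch.randn(64, 8))

        # Discriminator step: push real samples toward label 1 and generated samples toward 0.
        d_loss = bce(D(real), torch.ones(64, 1)) + bce(D(fake.detach()), torch.zeros(64, 1))
        opt_d.zero_grad()
        d_loss.backward()
        opt_d.step()

        # Generator step: try to make the discriminator label generated samples as real.
        g_loss = bce(D(fake), torch.ones(64, 1))
        opt_g.zero_grad()
        g_loss.backward()
        opt_g.step()

    print(G(torch.randn(1000, 8)).mean().item())  # typically drifts toward the real mean of 3.0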
Genetic Programming:
- Evolutionary computation technique that evolves computer programs to solve complex problems.
- Programs are represented as trees, with nodes representing operations or functions and leaves representing input variables or constants.
- Evolutionary operators, such as selection, crossover, and mutation, are applied to the program trees to generate new solutions.
- Genetic programming can automatically discover algorithms or mathematical expressions for a given problem.
Graph Analytics:
- Focuses on analyzing and extracting insights from structured data represented as graphs.
- Graphs consist of nodes (vertices) and edges (connections) that represent relationships or dependencies between data elements.
- Graph analytics algorithms include graph traversal, centrality measures, community detection, and graph clustering (two basic operations are sketched after this list).
- Applications include social network analysis, recommendation systems, network optimization, and fraud detection.
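A short sketch of two basic graph analytics operations, degree centrality and breadth-first traversal, on a toy adjacency list; the example graph is an illustrative assumption.

    from collections import deque

    # A small undirected graph stored as an adjacency list (e.g. a toy social network).
    graph = {
        "A": ["B", "C"], "B": ["A", "C", "D"], "C": ["A", "B"], "D": ["B", "E"], "E": ["D"],
    }

    # Degree centrality: nodes with many direct connections are the most central.
    print({node: len(neigh) for node, neigh in graph.items()})   # {'A': 2, 'B': 3, 'C': 2, 'D': 2, 'E': 1}

    def bfs_distances(start):
        """Graph traversal (breadth-first search): hop distance from start to every reachable node."""
        distances = {start: 0}
        queue = deque([start])
        while queue:
            node = queue.popleft()
            for neighbor in graph[node]:
                if neighbor not in distances:
                    distances[neighbor] = distances[node] + 1
                    queue.append(neighbor)
        return distances

    print(bfs_distances("A"))                    # {'A': 0, 'B': 1, 'C': 1, 'D': 2, 'E': 3}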
Human-in-the-Loop Machine Learning:
- Combines human intelligence with machine learning algorithms to solve complex problems.
- Humans provide input, feedback, or corrections to improve the performance or accuracy of machine learning models.
- Interactive interfaces, active learning, and crowdsourcing techniques are used to involve human expertise in the learning process.
- Human-in-the-loop machine learning is valuable in domains where human judgment, creativity, or domain knowledge is critical.
Hybrid Optimization Algorithms:
- Combines multiple optimization techniques or algorithms to solve optimization problems.
- Hybridization aims to leverage the strengths of different algorithms and overcome their individual limitations.
- Examples include combining evolutionary algorithms with local search methods or constraint programming with mathematical programming.
- Hybrid optimization approaches can improve solution quality and convergence speed or handle problem-specific requirements.
Interactive Optimization:
- Optimization approaches that involve user interaction and feedback in the decision-making process.
- Users provide preferences, constraints, or priorities to guide the optimization algorithm.
- Interactive optimization can handle complex objectives, multiple stakeholders, and decision-making under uncertainty.
- Interactive visualization, preference elicitation, and multi-criteria decision analysis techniques are employed in this field.
Interpretable Machine Learning:
- Focuses on developing machine learning models that are understandable and provide human-interpretable explanations.
- Transparent models, rule-based systems, feature importance analysis, or local explanations are used to enhance interpretability.
- Interpretable machine learning is crucial in domains where explainability, fairness, and accountability are required, such as healthcare and finance.
Machine Learning for Healthcare:
- Applies machine learning techniques to analyze medical data, improve patient care, and assist in medical decision-making.
- Includes tasks such as disease diagnosis, treatment prediction, patient monitoring, and medical image analysis.
- Machine learning models can integrate heterogeneous data sources, identify patterns, and provide personalized healthcare solutions.
- Challenges include data privacy, regulatory compliance, and the need for explainability and interpretability.
Meta-Learning:
- Focuses on learning to learn or learning to adapt to new tasks or environments.
- Learns higher-level knowledge or models that can be generalized across different learning problems.
- Meta-learning algorithms can learn how to select or combine learning algorithms, adapt hyperparameters, or initialize models.
- Meta-learning enables faster learning, transfer learning, and generalization to new tasks with limited data.
Metaheuristics for Optimization Problems:
- General-purpose optimization algorithms that can find good solutions to complex optimization problems.
- Metaheuristics are inspired by natural phenomena or metaphors, such as evolution, swarm intelligence, or physical processes.
- Examples include genetic algorithms, particle swarm optimization, simulated annealing, and ant colony optimization (a simulated annealing sketch follows this list).
- Metaheuristics explore the solution space efficiently and are applicable to various optimization problems without specific domain knowledge.
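A simulated annealing sketch on a one-dimensional test function with several local minima; the objective, cooling schedule, and neighborhood size are illustrative assumptions.

    import math
    import random

    random.seed(0)

    def objective(x):
        return x ** 2 + 10 * math.sin(x)         # a simple function with several local minima

    x = random.uniform(-10, 10)                  # current solution
    best_x, best_f = x, objective(x)
    temperature = 10.0

    for step in range(5000):
        candidate = x + random.gauss(0.0, 0.5)   # random neighbor of the current solution
        delta = objective(candidate) - objective(x)
        # Always accept improvements; accept worse moves with a temperature-dependent
        # probability, which lets the search escape local minima early on.
        if delta < 0 or random.random() < math.exp(-delta / temperature):
            x = candidate
        if objective(x) < best_f:
            best_x, best_f = x, objective(x)
        temperature *= 0.999                     # cooling schedule

    print(best_x, best_f)                        # typically close to the global minimum near x = -1.3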