(1) Multi-regularization sparse-reconstruction signal denoising based on multifactorial evolutionary optimization
When rolling bearings operate in complex industrial environments, the fault signatures in their vibration signals are often masked by strong background noise; in the early fault stage in particular, the weak impact responses excited by localized defects are buried in noise and hard to identify. Sparse representation theory offers an effective mathematical framework for extracting impact features from noisy signals: its core idea is to exploit the signal's sparsity prior under a suitable dictionary to separate signal from noise. Solving the sparse reconstruction problem, however, requires choosing regularization parameters. Different regularization terms impose different forms of sparsity constraint, and poorly chosen parameters yield under- or over-sparse reconstructions that degrade fault feature extraction. This work proposes a multi-regularization sparse-reconstruction denoising model based on a multifactorial evolutionary algorithm, recasting the joint optimization of multiple regularization constraints as a multi-objective optimization task. Specifically, the original problem is decomposed into three subtasks with different regularization terms, corresponding respectively to promoting overall signal sparsity, preserving local signal structure, and bounding the reconstruction error. The subtasks share the regularization parameters to be optimized and search for the global optimum through co-evolution. On the algorithmic side, a golden-section strategy partitions the population into tiers so that each group contains individuals of similar fitness and population diversity is maintained; a two-point crossover operator generates new individuals while keeping the sparsity structure of solutions intact; and an iterative thresholding algorithm is introduced in the local search stage to accelerate subtask convergence. Validation on both simulated signals and measured bearing vibration data shows that even under strong noise at a signal-to-noise ratio of -10 dB, the proposed method still extracts the periodic impact components effectively and markedly improves the SNR of the reconstructed signal.
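The iterative thresholding step mentioned above can be sketched in isolation. The fragment below is a minimal ISTA (iterative soft-thresholding) denoiser, assuming an identity dictionary and a single l1 weight `lam`; the step size and signal values are illustrative, not the paper's settings.

```python
import numpy as np

def soft_threshold(x, tau):
    # Proximal operator of the l1 norm: shrinks coefficients toward zero.
    return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)

def ista_denoise(signal, lam, step=0.5, n_iter=200):
    # ISTA with an identity dictionary: gradient step on the data-fidelity
    # term, followed by soft-thresholding for the l1 sparsity prior.
    x = np.zeros_like(signal)
    for _ in range(n_iter):
        x = soft_threshold(x - step * (x - signal), lam * step)
    return x

rng = np.random.default_rng(0)
clean = np.zeros(64)
clean[::16] = 5.0                                  # sparse periodic impulses
noisy = clean + 0.3 * rng.standard_normal(64)      # additive background noise
denoised = ista_denoise(noisy, lam=1.0)
```

Small noise-only samples are driven exactly to zero by the threshold, while the large impulses survive (shrunk by roughly `lam`), which is the signal/noise separation the sparsity prior provides.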
(2) Two-dimensional fault feature representation and enhancement based on the sparse frequency spiral spectrum
Although a one-dimensional time-domain vibration signal contains rich fault information, its representational capacity is limited and it struggles to reveal the nonlinear and nonstationary characteristics embedded in the signal. Converting vibration signals into two-dimensional image representations is an effective way to strengthen feature expression, since it allows advanced techniques from image processing and computer vision to be applied to fault analysis. Conventional time-frequency images, such as short-time Fourier transform spectrograms and wavelet time-frequency maps, have limited resolution under strong noise, and fault features are easily blurred. Building on the sparse-reconstruction denoising above, this work proposes a new two-dimensional feature transform: the sparse frequency spiral spectrum. The method first applies the fast Fourier transform to the sparsely reconstructed, denoised signal to obtain its frequency-domain representation, then applies high-pass pre-emphasis to raise the energy share of the high-frequency impact components. A spiral filling strategy then maps the one-dimensional spectrum into a two-dimensional image matrix: the filling starts at the image center and unwinds outward clockwise, so low-frequency components sit at the center and high-frequency components spread toward the edges, an arrangement that matches the human visual system's greater sensitivity to the central region. To further improve the discriminability and robustness of the image features, an improved scale-invariant feature transform (SIFT) is applied to the spiral-spectrum image, extracting rotation- and scale-invariant keypoint descriptors and re-adjusting image contrast and brightness according to the keypoint distribution, making the visual differences between fault types more pronounced. Compared with existing two-dimensional representations such as Gramian angular fields, Markov transition fields, and recurrence plots, the sparse frequency spiral spectrum shows clear advantages in noise robustness, feature fidelity, and human readability.
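The center-out spiral mapping can be demonstrated on a tiny grid. This is a minimal sketch using the classic square-spiral walk (offsets relative to the center with a turn rule); the 5x5 size and the `np.arange` stand-in for spectrum bins are illustrative only.

```python
import numpy as np

def spiral_fill(data, size):
    # Walk a square spiral outward from the center: early (low-frequency)
    # samples land in the middle, late (high-frequency) ones near the edges.
    image = np.zeros((size, size))
    cx = cy = size // 2
    x = y = 0            # offsets relative to the center pixel
    dx, dy = 0, -1
    idx = 0
    for _ in range(size * size):
        px, py = cx + x, cy + y
        if 0 <= px < size and 0 <= py < size and idx < len(data):
            image[py, px] = data[idx]
            idx += 1
        # Turn rule of the standard square spiral (corners of each ring).
        if x == y or (x < 0 and x == -y) or (x > 0 and x == 1 - y):
            dx, dy = -dy, dx
        x, y = x + dx, y + dy
    return image

img = spiral_fill(np.arange(25), 5)   # bin 0 at the center, bin 24 at a corner
```

After the call, `img[2, 2]` holds the first (lowest-frequency) bin and the highest bins sit on the outer ring, which is exactly the center-low/edge-high layout described above.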
(3) Fault sample generation and small-sample diagnosis based on a latent diffusion model
The strong performance of deep learning models rests on large-scale labeled data, but obtaining enough fault samples in industrial practice is difficult: equipment spends most of its time in normal operation, fault events are comparatively rare, and some severe fault types may never occur over a machine's entire service life. With too few training samples, deep neural networks tend to overfit, and their generalization on unseen test samples degrades sharply. Generative AI offers a new route around the small-sample problem: learning the distribution of real data in order to synthesize high-quality samples that augment the training set. This work adopts the latent diffusion model as the core generative technique; by running the diffusion and denoising process in a latent space rather than the raw pixel space, it greatly reduces computational cost while preserving generation quality. To address the model's limited efficiency on industrial vibration data, a nonlinear activation-free module is proposed to replace part of the attention modules in the original architecture; the module performs feature mapping through simplified linear transforms, substantially speeding up training and inference while maintaining generation quality. Real two-dimensional fault images produced by the sparse frequency spiral spectrum method are used to train the improved latent diffusion model; once the generator has learned the distribution of real fault samples, it can synthesize an arbitrary number of virtual fault images.
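The forward (noising) half of the diffusion process can be sketched in a few lines. This minimal NumPy version assumes the simplified linear schedule alpha_bar(t) = 1 - t used in the implementation below; a production model would use a cosine or learned schedule.

```python
import numpy as np

def forward_diffusion(x, t, rng):
    # Variance-preserving noising: mix the clean sample with Gaussian noise
    # so that for unit-variance data the marginal variance stays ~1 at every t.
    noise = rng.standard_normal(x.shape)
    alpha = 1.0 - t                     # simplified schedule alpha_bar(t) = 1 - t
    noisy = np.sqrt(alpha) * x + np.sqrt(1.0 - alpha) * noise
    return noisy, noise

rng = np.random.default_rng(42)
x = rng.standard_normal(10000)          # stand-in for unit-variance image data
noisy, _ = forward_diffusion(x, t=0.5, rng=rng)
clean_back, _ = forward_diffusion(x, t=0.0, rng=rng)   # t = 0 returns x unchanged
```

The denoising network is trained to predict the injected `noise` from `noisy` and `t`; reversing this mixing step by step is what generates new samples.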
import numpy as np
import torch
import torch.nn as nn
from scipy.fft import fft
from scipy.ndimage import gaussian_filter
from sklearn.preprocessing import StandardScaler


class MultiFactorEvolutionaryOptimizer:
    """Multifactorial evolutionary search over the three regularization weights."""

    def __init__(self, signal, dictionary, pop_size=50, max_iter=100):
        self.signal = signal
        self.dictionary = dictionary
        self.pop_size = pop_size
        self.max_iter = max_iter
        self.golden_ratio = (1 + np.sqrt(5)) / 2

    def initialize_population(self):
        population = []
        for _ in range(self.pop_size):
            individual = {
                'lambda_l1': np.random.uniform(0.001, 1.0),
                'lambda_l2': np.random.uniform(0.001, 1.0),
                'lambda_tv': np.random.uniform(0.001, 1.0),
            }
            population.append(individual)
        return population

    def compute_fitness(self, individual):
        coefficients = self.sparse_reconstruction(
            individual['lambda_l1'], individual['lambda_l2'], individual['lambda_tv'])
        reconstructed = self.dictionary @ coefficients
        reconstruction_error = np.sum((self.signal - reconstructed) ** 2)
        sparsity = np.sum(np.abs(coefficients) > 1e-6)
        l1_norm = np.sum(np.abs(coefficients))
        # Three objectives: data fidelity, support size, l1 energy.
        return np.array([reconstruction_error, sparsity, l1_norm])

    def sparse_reconstruction(self, lambda_l1, lambda_l2, lambda_tv, max_iter=100):
        # Iterative soft-thresholding (ISTA) with an additional l2 (ridge) term.
        # The TV term is not applied in this simplified solver.
        n = self.dictionary.shape[1]
        x = np.zeros(n)
        step = 0.01
        for _ in range(max_iter):
            gradient = self.dictionary.T @ (self.dictionary @ x - self.signal)
            gradient += lambda_l2 * x
            x = x - step * gradient
            x = np.sign(x) * np.maximum(np.abs(x) - lambda_l1 * step, 0)
        return x

    def golden_section_grouping(self, population, fitnesses):
        # Split the fitness-sorted population into three tiers whose sizes
        # follow successive golden-section ratios.
        sorted_indices = np.argsort(fitnesses[:, 0])
        n_groups = 3
        group_sizes = []
        remaining = self.pop_size
        for _ in range(n_groups - 1):
            size = int(remaining / self.golden_ratio)
            group_sizes.append(size)
            remaining -= size
        group_sizes.append(remaining)
        groups, start = [], 0
        for size in group_sizes:
            groups.append([population[sorted_indices[j]]
                           for j in range(start, start + size)])
            start += size
        return groups

    def two_point_crossover(self, parent1, parent2):
        keys = list(parent1.keys())
        point1, point2 = sorted(np.random.choice(len(keys), 2, replace=False))
        child = {}
        for i, key in enumerate(keys):
            child[key] = parent2[key] if point1 <= i < point2 else parent1[key]
        return child

    def optimize(self):
        population = self.initialize_population()
        fitnesses = np.array([self.compute_fitness(ind) for ind in population])
        best_individual = population[int(np.argmin(fitnesses[:, 0]))]
        best_fitness = np.min(fitnesses[:, 0])
        for _ in range(self.max_iter):
            groups = self.golden_section_grouping(population, fitnesses)
            new_population = []
            for group in groups:
                for i in range(len(group)):
                    parent1 = group[i]
                    parent2 = group[np.random.randint(len(group))]
                    child = self.two_point_crossover(parent1, parent2)
                    for key in child:
                        # Small multiplicative mutation, clipped to the search range.
                        if np.random.rand() < 0.1:
                            child[key] *= np.random.uniform(0.8, 1.2)
                        child[key] = np.clip(child[key], 0.001, 1.0)
                    new_population.append(child)
            population = new_population[:self.pop_size]
            fitnesses = np.array([self.compute_fitness(ind) for ind in population])
            current_best_idx = int(np.argmin(fitnesses[:, 0]))
            if fitnesses[current_best_idx, 0] < best_fitness:
                best_fitness = fitnesses[current_best_idx, 0]
                best_individual = population[current_best_idx]
        return best_individual


class SparseFrequencySpiralMap:
    """One-dimensional spectrum -> two-dimensional spiral-spectrum image."""

    def __init__(self, image_size=224):
        self.image_size = image_size

    def apply_preemphasis(self, signal, alpha=0.97):
        # First-order high-pass pre-emphasis: boosts impact-related high frequencies.
        return np.append(signal[0], signal[1:] - alpha * signal[:-1])

    def spiral_fill(self, data, size):
        # Fill the image from the center outward along a square spiral, so low
        # frequencies land at the center and high frequencies at the edges.
        image = np.zeros((size, size))
        cx, cy = size // 2, size // 2
        x, y = 0, 0          # offsets relative to the center pixel
        dx, dy = 0, -1
        data_idx = 0
        for _ in range(size * size):
            px, py = cx + x, cy + y
            if 0 <= px < size and 0 <= py < size and data_idx < len(data):
                image[py, px] = data[data_idx]
                data_idx += 1
            if x == y or (x < 0 and x == -y) or (x > 0 and x == 1 - y):
                dx, dy = -dy, dx
            x, y = x + dx, y + dy
        return image

    def transform(self, signal):
        preemphasized = self.apply_preemphasis(signal)
        spectrum = np.abs(fft(preemphasized))[:len(preemphasized) // 2]
        spectrum_normalized = (spectrum - np.min(spectrum)) / (
            np.max(spectrum) - np.min(spectrum) + 1e-10)
        target_length = self.image_size * self.image_size
        if len(spectrum_normalized) < target_length:
            spectrum_resampled = np.interp(
                np.linspace(0, 1, target_length),
                np.linspace(0, 1, len(spectrum_normalized)),
                spectrum_normalized)
        else:
            spectrum_resampled = spectrum_normalized[:target_length]
        return self.spiral_fill(spectrum_resampled, self.image_size)

    def enhance_with_sift_inspired(self, image):
        # Difference-of-Gaussians keypoint response, used to boost local contrast.
        scales = [1, 2, 4]
        dog_images = []
        for i in range(len(scales) - 1):
            g1 = gaussian_filter(image, scales[i])
            g2 = gaussian_filter(image, scales[i + 1])
            dog_images.append(g2 - g1)
        keypoint_map = np.zeros_like(image)
        for dog in dog_images:
            threshold = np.std(dog) * 2
            keypoint_map += (np.abs(dog) > threshold).astype(float)
        enhanced = image * (1 + 0.5 * keypoint_map)
        return (enhanced - np.min(enhanced)) / (
            np.max(enhanced) - np.min(enhanced) + 1e-10)


class NAFBlock(nn.Module):
    """Nonlinear activation-free block: a SimpleGate replaces the attention module."""

    def __init__(self, channels):
        super().__init__()
        self.conv1 = nn.Conv2d(channels, channels * 2, 1)
        self.conv2 = nn.Conv2d(channels * 2, channels, 1)

    def forward(self, x):
        identity = x
        x = self.conv1(x)
        x1, x2 = x.chunk(2, dim=1)
        x = x1 * x2            # SimpleGate: elementwise product instead of an activation
        x = self.conv2(x)
        return x + identity


class NAFLatentDiffusion(nn.Module):
    """Latent diffusion with NAF blocks; simplified linear schedule alpha_bar = 1 - t."""

    def __init__(self, latent_dim=64, image_size=224):
        super().__init__()
        self.latent_dim = latent_dim
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 32, 4, 2, 1), nn.ReLU(),
            nn.Conv2d(32, 64, 4, 2, 1), nn.ReLU(),
            nn.Conv2d(64, latent_dim, 4, 2, 1))
        self.naf_blocks = nn.Sequential(*[NAFBlock(latent_dim) for _ in range(4)])
        # No output Sigmoid: the network predicts unbounded Gaussian noise.
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(latent_dim, 64, 4, 2, 1), nn.ReLU(),
            nn.ConvTranspose2d(64, 32, 4, 2, 1), nn.ReLU(),
            nn.ConvTranspose2d(32, 1, 4, 2, 1))
        self.time_embedding = nn.Sequential(
            nn.Linear(1, latent_dim), nn.SiLU(), nn.Linear(latent_dim, latent_dim))

    def forward_diffusion(self, x, t, noise=None):
        if noise is None:
            noise = torch.randn_like(x)
        # Clamp alpha away from zero so sqrt terms stay well defined at t = 1.
        alpha = (1 - t).clamp(min=1e-4).view(-1, 1, 1, 1)
        noisy = torch.sqrt(alpha) * x + torch.sqrt(1 - alpha) * noise
        return noisy, noise

    def predict_noise(self, noisy_x, t):
        latent = self.encoder(noisy_x)
        t_emb = self.time_embedding(t.view(-1, 1)).view(-1, self.latent_dim, 1, 1)
        latent = self.naf_blocks(latent + t_emb)
        return self.decoder(latent)

    @torch.no_grad()
    def sample(self, num_samples, num_steps=50, device='cpu'):
        x = torch.randn(num_samples, 1, 224, 224, device=device)
        for i in range(num_steps, 0, -1):
            t = torch.full((num_samples,), i / num_steps, device=device)
            predicted_noise = self.predict_noise(x, t)
            # Clamp avoids dividing by sqrt(0) at the first (t = 1) step.
            alpha = (1 - t).clamp(min=1e-4).view(-1, 1, 1, 1)
            x = (x - torch.sqrt(1 - alpha) * predicted_noise) / torch.sqrt(alpha)
            if i > 1:
                x = x + torch.sqrt(1 - alpha) * torch.randn_like(x) * 0.5
        return x


class SwinTransformerBlock(nn.Module):
    """Simplified block: global attention stands in for shifted-window attention."""

    def __init__(self, dim, num_heads, window_size=7):
        super().__init__()
        self.dim = dim
        self.num_heads = num_heads
        self.window_size = window_size  # kept for interface parity; unused here
        self.norm1 = nn.LayerNorm(dim)
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.norm2 = nn.LayerNorm(dim)
        self.mlp = nn.Sequential(
            nn.Linear(dim, dim * 4), nn.GELU(), nn.Linear(dim * 4, dim))

    def forward(self, x):
        h = self.norm1(x)
        x = x + self.attn(h, h, h)[0]
        x = x + self.mlp(self.norm2(x))
        return x


class SwinTransformerClassifier(nn.Module):
    def __init__(self, image_size=224, patch_size=4, num_classes=4, dim=96, num_heads=3):
        super().__init__()
        self.patch_embed = nn.Conv2d(1, dim, patch_size, patch_size)
        num_patches = (image_size // patch_size) ** 2
        self.pos_embed = nn.Parameter(torch.randn(1, num_patches, dim))
        self.blocks = nn.Sequential(
            *[SwinTransformerBlock(dim, num_heads) for _ in range(4)])
        self.norm = nn.LayerNorm(dim)
        self.head = nn.Linear(dim, num_classes)

    def forward(self, x):
        x = self.patch_embed(x)            # (B, dim, H/patch, W/patch)
        x = x.flatten(2).transpose(1, 2)   # (B, num_patches, dim)
        x = x + self.pos_embed
        x = self.blocks(x)
        x = self.norm(x).mean(dim=1)       # global average pooling over patches
        return self.head(x)


class SmallSampleFaultDiagnoser:
    """End-to-end pipeline: denoise -> spiral spectrum -> augment -> classify."""

    def __init__(self, num_classes=4):
        self.sfsm = SparseFrequencySpiralMap()
        self.diffusion = NAFLatentDiffusion()
        self.classifier = SwinTransformerClassifier(num_classes=num_classes)
        self.scaler = StandardScaler()

    def preprocess_signal(self, signal):
        # Denoise with the evolutionary sparse solver (identity dictionary here),
        # then map to an enhanced spiral-spectrum image.
        n = len(signal)
        dictionary = np.eye(n)
        optimizer = MultiFactorEvolutionaryOptimizer(
            signal, dictionary, pop_size=30, max_iter=50)
        best_params = optimizer.optimize()
        coeffs = optimizer.sparse_reconstruction(
            best_params['lambda_l1'], best_params['lambda_l2'],
            best_params['lambda_tv'])
        denoised = dictionary @ coeffs
        sfsm_image = self.sfsm.transform(denoised)
        return self.sfsm.enhance_with_sift_inspired(sfsm_image)

    def train_diffusion(self, real_images, epochs=100):
        dataset = torch.utils.data.TensorDataset(
            torch.FloatTensor(real_images).unsqueeze(1))
        loader = torch.utils.data.DataLoader(dataset, batch_size=8, shuffle=True)
        optimizer = torch.optim.Adam(self.diffusion.parameters(), lr=1e-4)
        criterion = nn.MSELoss()
        for _ in range(epochs):
            for (x,) in loader:
                t = torch.rand(x.size(0))
                noisy_x, noise = self.diffusion.forward_diffusion(x, t)
                predicted = self.diffusion.predict_noise(noisy_x, t)
                # The regression target is the injected noise, not the clean image.
                loss = criterion(predicted, noise)
                optimizer.zero_grad()
                loss.backward()
                optimizer.step()

    def generate_samples(self, num_samples):
        self.diffusion.eval()
        with torch.no_grad():
            return self.diffusion.sample(num_samples).numpy()

    def train_classifier(self, signals, labels, augment_ratio=5, epochs=50):
        real_images = np.array([self.preprocess_signal(s) for s in signals])
        augmented_images = [real_images]
        augmented_labels = [labels]
        # The diffusion model here is unconditional, so it is retrained on each
        # class in turn; the generated samples then match the label they receive.
        for label in np.unique(labels):
            class_images = real_images[labels == label]
            self.train_diffusion(class_images, epochs=50)
            num_generate = len(class_images) * augment_ratio
            generated = self.generate_samples(num_generate)[:, 0]
            augmented_images.append(generated)
            augmented_labels.append(np.full(num_generate, label))
        all_images = np.vstack(augmented_images)
        all_labels = np.hstack(augmented_labels)
        dataset = torch.utils.data.TensorDataset(
            torch.FloatTensor(all_images).unsqueeze(1),
            torch.LongTensor(all_labels))
        loader = torch.utils.data.DataLoader(dataset, batch_size=16, shuffle=True)
        optimizer = torch.optim.Adam(self.classifier.parameters(), lr=1e-4)
        criterion = nn.CrossEntropyLoss()
        for _ in range(epochs):
            for batch_x, batch_y in loader:
                optimizer.zero_grad()
                loss = criterion(self.classifier(batch_x), batch_y)
                loss.backward()
                optimizer.step()

    def diagnose(self, signal):
        image = self.preprocess_signal(signal)
        x = torch.FloatTensor(image).unsqueeze(0).unsqueeze(0)
        self.classifier.eval()
        with torch.no_grad():
            pred = torch.argmax(self.classifier(x), dim=1).item()
        fault_types = ['normal', 'inner_race', 'outer_race', 'ball']
        return fault_types[pred]