In this work, we propose combining the Transweather restoration method with YOLOX into a single encoder, Transweather-YOLOX (Tw-YOLOX), for use in autonomous cars. The encoder takes as input images degraded by foggy weather conditions. A new image dataset is introduced as a benchmark for this research, comprising four groups of foggy images: Landscape, Trips, Night, and Real-view images. The Trips, Night, and Landscape groups were collected from the internet; for each image, two versions were created in Photoshop, namely the clear ground-truth image and the foggy input image. The proposed method predicts the clear image from its degraded foggy counterpart and detects the objects it contains. Results show that object detection on the predicted images is almost as successful as detection on the ground-truth images. A simple, unconventional measure, the image histogram, is used to assess how closely the predicted images resemble the originals. The improvement in clarity over the foggy inputs is evaluated with two parameters, Recall percentage and Average (Avg) accuracy percentage; both improve for the predicted output images compared with the foggy inputs, with the clear ground-truth images serving as the reference. Finally, the Peak Signal-to-Noise Ratio (PSNR) and Structural Similarity Index Measure (SSIM) are used to compare the performance of the proposed method with that of Transweather. The proposed method achieves a PSNR of 26.8 and an SSIM of 0.88, values almost equal to those of the Transweather method.