School of Agriculture, Food and Ecosystem Sciences - Research Publications

Search Results

Now showing 1 - 10 of 22
  • Item
    The Impact of Wet Fermentation on Coffee Quality Traits and Volatile Compounds Using Digital Technologies
Wu, H ; Viejo, CG ; Fuentes, S ; Dunshea, FR ; Suleria, HAR (MDPI, 2023-01)
Fermentation is critical for developing coffee's physicochemical properties. This study aimed to assess the differences in quality traits between fermented and unfermented coffee across four grinding sizes of coffee powder using multiple digital technologies. Two coffee treatments—(i) dry processing and (ii) wet fermentation—at four grinding levels (250, 350, 550, and 750 µm) were analysed using near-infrared spectroscopy (NIR), electronic nose (e-nose), and headspace solid-phase microextraction gas chromatography–mass spectrometry (HS-SPME-GC-MS) coupled with machine learning (ML) modelling. Most overtones detected by NIR fell within the ranges of 1700–2000 nm and 2200–2396 nm, while the enhanced peak responses of fermented coffee were lower. The overall voltage of nine e-nose sensors was significantly higher for fermented coffee (250 µm). Two ML classification models, using NIR (Model 1) and e-nose (Model 2) values as inputs to classify processing and brewing methods, were highly accurate (93.9% and 91.2%, respectively). Two highly precise ML regression models based on the same inputs, Model 3 for NIR (R = 0.96) and Model 4 for e-nose (R = 0.99), were developed to assess 14 volatile aromatic compounds obtained by GC-MS. Fermented coffee showed higher 2-methylpyrazine (2.20 ng/mL) and furfuryl acetate (2.36 ng/mL) contents, which induce a stronger fruity aroma. The proposed rapid, reliable, and low-cost method was shown to be effective in distinguishing coffee postharvest processing methods and evaluating their volatile compounds, with potential applications in coffee differentiation and quality assurance and control.
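The classification step this abstract describes—taking e-nose sensor voltages as inputs and predicting the processing method—can be sketched with a deliberately simple stand-in. The synthetic voltages, class shift, and nearest-centroid classifier below are hypothetical illustrations of the input/output shape, not the paper's actual ML models:

```python
import numpy as np

rng = np.random.default_rng(7)

# Hypothetical stand-in data: nine e-nose sensor voltages per sample,
# with fermented samples shifted higher (as the abstract reports for
# the 250 um grind).
dry = rng.normal(loc=1.0, scale=0.1, size=(40, 9))
fermented = rng.normal(loc=1.3, scale=0.1, size=(40, 9))
X = np.vstack([dry, fermented])
y = np.array([0] * 40 + [1] * 40)  # 0 = dry processing, 1 = wet fermentation

# Nearest-centroid classifier: a much simpler stand-in for the paper's
# classification models, used only to show the pipeline shape.
centroids = np.stack([X[y == c].mean(axis=0) for c in (0, 1)])

def classify(sample):
    """Assign the class whose centroid is nearest in sensor space."""
    return int(np.argmin(np.linalg.norm(centroids - sample, axis=1)))

preds = np.array([classify(s) for s in X])
accuracy = float((preds == y).mean())
print(accuracy)
```

A real model would be trained and evaluated on held-out samples; this sketch only illustrates mapping nine sensor voltages to a processing-method label.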
  • Item
    Livestock Identification Using Deep Learning for Traceability
    Dac, HH ; Gonzalez Viejo, C ; Lipovetzky, N ; Tongson, E ; Dunshea, FR ; Fuentes, S (MDPI, 2022-11)
Farm livestock identification and welfare assessment using non-invasive digital technology have gained interest in agriculture in the last decade, especially for accurate traceability. This study aimed to develop a face recognition system for dairy cows using advanced deep-learning models and computer vision techniques. The approach is non-invasive and potentially applicable to the identification and welfare assessment of other farm animals of importance. The video analysis pipeline follows standard human face recognition systems and comprises four steps: (i) face detection, (ii) face cropping, (iii) face encoding, and (iv) face lookup. Three deep learning (DL) models were used within the pipeline: (i) a face detector, (ii) a landmark predictor, and (iii) a face encoder. All DL models were fine-tuned through transfer learning on a dairy cow dataset collected from a robotic dairy farm at the Dookie campus of The University of Melbourne, Australia. Results across videos from 89 different dairy cows showed an overall identification accuracy of 84%. The computer program developed may be deployed on edge devices; it was tested on an NVIDIA Jetson Nano board with a camera stream. Furthermore, it could be integrated into the welfare assessment tools previously developed by our research group.
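The final step of the pipeline described above—face lookup, matching a new face encoding against a database of known animals—reduces to a nearest-neighbour search with a distance threshold. The 128-dimensional encodings, cow IDs, and threshold value below are hypothetical placeholders, not values from the study:

```python
import numpy as np

def lookup_identity(encoding, known_encodings, known_ids, threshold=0.6):
    """Step (iv) of the pipeline: match a face encoding against stored
    encodings of known cows by Euclidean distance; return None if no
    stored encoding is within the threshold."""
    distances = np.linalg.norm(known_encodings - encoding, axis=1)
    best = int(np.argmin(distances))
    if distances[best] <= threshold:
        return known_ids[best], float(distances[best])
    return None, float(distances[best])  # unknown animal

# Hypothetical 128-D encodings for three known cows
rng = np.random.default_rng(0)
db = rng.normal(size=(3, 128))
ids = ["cow_017", "cow_042", "cow_089"]

# A query encoding close to cow_042's stored encoding
query = db[1] + 0.01
match, dist = lookup_identity(query, db, ids)
print(match)
```

In practice the encodings come from the fine-tuned face encoder, and the threshold is tuned on validation data; this sketch only shows the lookup logic.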
  • Item
    Virtual reality environments on the sensory acceptability and emotional responses of no- and full-sugar chocolate
    Torrico, DD ; Sharma, C ; Dong, W ; Fuentes, S ; Viejo, CG ; Dunshea, FR (ELSEVIER, 2021-02)
  • Item
    Digital technologies to assess yoghurt quality traits and consumers acceptability
    Gupta, MK ; Viejo, CG ; Fuentes, S ; Torrico, DD ; Saturno, PC ; Gras, SL ; Dunshea, FR ; Cottrell, JJ (WILEY, 2022-10)
  • Item
    Biometric Physiological Responses from Dairy Cows Measured by Visible Remote Sensing Are Good Predictors of Milk Productivity and Quality through Artificial Intelligence
    Fuentes, S ; Gonzalez Viejo, C ; Tongson, E ; Lipovetzky, N ; Dunshea, FR (MDPI, 2021-10)
New and emerging technologies, especially those based on non-invasive video and thermal infrared cameras, can be readily tested on robotic milking facilities. This research presents non-invasive computer vision methods to estimate cows' heart rate, respiration rate, and abrupt movements captured using RGB cameras, together with machine learning modelling to predict eye temperature, milk production, and milk quality. RGB and infrared thermal videos (IRTV) were acquired from cows at a robotic milking facility. Results from 102 different cows with replicates (n = 150) showed that an artificial neural network (ANN) model using only inputs from RGB cameras achieved high accuracy (R = 0.96), with no signs of overfitting, in predicting eye temperature (°C, using IRTV as ground truth), daily milk productivity (kg milk per day), cow milk productivity (kg milk per cow), milk fat (%), and milk protein (%). The ANN model was then deployed on an independent set of 132 cow samples obtained on different days, which also rendered high accuracy (R = 0.93), similar to the model development stage. This model can be easily applied using affordable RGB camera systems to obtain all the proposed targets, including eye temperature, which can also be used to model animal welfare and biotic/abiotic stress. Furthermore, these models can be readily deployed in conventional dairy farms.
  • Item
    Digital Integration and Automated Assessment of Eye-Tracking and Emotional Response Data Using the BioSensory App to Maximize Packaging Label Analysis
    Fuentes, S ; Gonzalez Viejo, C ; Torrico, DD ; Dunshea, FR (MDPI, 2021-11)
New and emerging non-invasive digital tools, such as eye-tracking, facial expression analysis, and physiological biometrics, have been implemented to extract more objective sensory responses from panelists to packaging and, specifically, labels. However, integrating these technologies from different providers, each with its own software for data acquisition and analysis, makes their practical application difficult for research and industry. This study proposed a prototype integration of eye-tracking and emotional biometrics using the BioSensory computer application for three sample labels: Stevia, Potato chips, and Spaghetti. Multivariate data analyses are presented, showing the integrative analysis approach of the proposed prototype system. Further studies can be conducted with this system, integrating other available biometrics, such as physiological responses with heart rate, blood pressure, and temperature changes analyzed while panelists focus on different label components or packaging features. By maximizing data extraction from the various components of packaging and labels, smart predictive systems such as machine learning can also be implemented to assess liking and other parameters of interest from the whole package and its specific components.
  • Item
  • Item
    Assessment of Beer Quality Based on a Robotic Pourer, Computer Vision, and Machine Learning Algorithms Using Commercial Beers
    Viejo, CG ; Fuentes, S ; Torrico, DD ; Howell, K ; Dunshea, FR (WILEY, 2018-05)
Sensory attributes of beer are directly linked to perceived foam-related parameters and beer color. The aim of this study was to develop an objective predictive model using machine learning modeling to assess the intensity levels of sensory descriptors in beer from physical measurements of color and foam-related parameters. A robotic pourer (RoboBEER) was used to obtain 15 color and foam-related parameters from 22 different commercial beer samples. A sensory session using quantitative descriptive analysis (QDA®) with trained panelists was conducted to assess the intensity of 10 beer descriptors. Principal component analysis explained 64% of the data variability, with correlations found between foam-related descriptors from the sensory session and RoboBEER, such as a positive and significant correlation between carbon dioxide and carbonation mouthfeel (R = 0.62), and correlations of sensory viscosity with maximum foam volume and total foam lifetime (R = 0.75 and R = 0.77, respectively). Using the RoboBEER parameters as inputs, an artificial neural network (ANN) regression model showed high correlation (R = 0.91) in predicting the intensity levels of 10 related sensory descriptors: yeast, grain, and hops aromas; hops flavor; bitter, sour, and sweet tastes; viscosity; carbonation; and astringency. PRACTICAL APPLICATIONS: This paper presents a novel approach for food science using machine learning modeling techniques that could contribute significantly to rapid screening of food and beverage products for the food industry and to the implementation of Artificial Intelligence (AI). The use of RoboBEER to assess beer quality showed it to be a reliable, objective, accurate, and less time-consuming method for predicting sensory descriptors compared to trained sensory panels. Hence, this method could be useful as a rapid screening procedure to evaluate beer quality at the end of the production line for industry applications.
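The regression described above maps 15 physical measurements to 10 sensory descriptor intensities. A minimal NumPy sketch of such a network (15 inputs, one hidden layer, 10 outputs) is shown below; the synthetic data, hidden-layer size, learning rate, and epoch count are assumptions for illustration and do not reproduce the paper's actual model:

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical stand-ins: 22 beer samples x 15 RoboBEER parameters
# (colour and foam measurements), and 10 descriptor intensities.
X = rng.normal(size=(22, 15))
true_W = rng.normal(size=(15, 10))
Y = np.tanh(X @ true_W)  # synthetic targets

# One-hidden-layer regression network: 15 -> 8 (tanh) -> 10 (linear)
W1 = rng.normal(scale=0.1, size=(15, 8)); b1 = np.zeros(8)
W2 = rng.normal(scale=0.1, size=(8, 10)); b2 = np.zeros(10)

lr = 0.05
for _ in range(2000):
    H = np.tanh(X @ W1 + b1)   # hidden activations
    pred = H @ W2 + b2         # predicted descriptor intensities
    err = pred - Y
    # Backpropagate mean-squared-error gradients
    gW2 = H.T @ err / len(X)
    gb2 = err.mean(axis=0)
    gH = err @ W2.T * (1 - H ** 2)
    gW1 = X.T @ gH / len(X)
    gb1 = gH.mean(axis=0)
    W2 -= lr * gW2; b2 -= lr * gb2
    W1 -= lr * gW1; b1 -= lr * gb1

mse = float(((pred - Y) ** 2).mean())
print(round(mse, 3))
```

With only 22 samples, a model like this would normally be evaluated with cross-validation to guard against overfitting; the loop above simply shows the input/output dimensionality and training mechanics.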
  • Item
  • Item
    Non-Contact Heart Rate and Blood Pressure Estimations from Video Analysis and Machine Learning Modelling Applied to Food Sensory Responses: A Case Study for Chocolate
    Viejo, CG ; Fuentes, S ; Torrico, DD ; Dunshea, FR (MDPI, 2018-06)
Traditional methods to assess heart rate (HR) and blood pressure (BP) are intrusive and can affect the results of sensory analysis of food, as participants are aware of the sensors. This paper aims to validate a non-contact method to measure HR using the photoplethysmography (PPG) technique and to develop machine learning (ML) models to predict real HR and BP based on raw video analysis (RVA), with an example application in chocolate consumption. The RVA used a computer vision algorithm based on luminosity changes in the different RGB color channels across three face regions (forehead and both cheeks). To validate the proposed method and ML models, a home oscillometric monitor and a finger sensor were used. Results showed high correlations with the G color channel (R² = 0.83). Two ML models were developed using the three face regions: (i) Model 1 to predict HR and BP from the RVA outputs, with R = 0.85, and (ii) Model 2, a time-series model using HR, magnitude, and luminosity from the RVA as inputs to predict HR values every second, with R = 0.97. An application to the sensory analysis of chocolate showed significant correlations between changes in HR and BP and chocolate hardness and purchase intention.
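The core of the green-channel PPG idea described above—recovering the pulse frequency from mean face-region luminosity over time—can be sketched with a Fourier-based estimator on synthetic data. The sampling rate, signal composition, and cardiac frequency band below are assumptions for illustration, not the paper's algorithm:

```python
import numpy as np

def estimate_hr_bpm(green_means, fps):
    """Estimate heart rate from the mean green-channel intensity of a
    face region over time, via the dominant FFT frequency within a
    plausible cardiac band (0.7-4 Hz, i.e. 42-240 bpm)."""
    signal = green_means - np.mean(green_means)
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fps)
    power = np.abs(np.fft.rfft(signal)) ** 2
    band = (freqs >= 0.7) & (freqs <= 4.0)
    peak = freqs[band][np.argmax(power[band])]
    return 60.0 * peak

# Synthetic 10 s recording at 30 fps: a 1.2 Hz (72 bpm) pulse riding
# on slow illumination drift plus noise (hypothetical data).
fps, seconds = 30, 10
t = np.arange(fps * seconds) / fps
rng = np.random.default_rng(1)
green = (120 + 0.5 * np.sin(2 * np.pi * 1.2 * t)
         + 0.05 * t + 0.1 * rng.normal(size=t.size))
print(round(estimate_hr_bpm(green, fps)))  # ~72 bpm
```

Real face video adds motion artefacts and region tracking; band-limiting the spectrum, as above, is one common way to reject slow illumination drift.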