Computing and Information Systems - Theses
Information systems-enabled sustainability transformation in food supply chain management: a multi-theory perspective
There have been growing expectations that the food industry should improve its economic, environmental, and social impacts simultaneously. Compared to other industries, the food industry faces pressing environmental and social issues including food waste due to shelf-life constraints, disruptions caused by weather or pests, the use of toxic pesticides in farming, food contamination, child labour, and human rights violations. Consequently, organisations in food supply chains are pressured to integrate environmental and social objectives, known collectively as sustainability, into their supply chain management. However, transforming towards a sustainable supply chain is challenging. It is inter-organisational in nature, involving different, and sometimes conflicting, objectives among various stakeholders. Moreover, successful sustainability transformation requires a set of specific resources and organisational capabilities that are often supported by technologies in general and information systems (IS) in particular. Nonetheless, previous studies do not adequately explain how IS can be used to develop the capabilities necessary to engage in sustainable practice. This study addresses these knowledge gaps by investigating the following research question: “How do IS support the sustainability transformation in food supply chains?” The study applies Stakeholder Theory, Affordance Theory, and Dynamic Capability Theory to guide the research in planning, execution, and data analysis. It adopts a multiple case study approach involving five Indonesian food manufacturers and their suppliers, resulting in the development of an IS-enabled sustainability transformation model that addresses the research question. The IS-enabled sustainability transformation model presents the key elements that contribute to successful sustainability transformation in supply chains.
The model describes the interactions between organisations and IS that result in the identification of nine possibilities for action, referred to as IS affordances. The actualisation of these affordances, in turn, leads to the development of a set of sustainability capabilities. The exercise of these sustainability capabilities collectively contributes to the development of dynamic sustainability capabilities pertinent to a successful transformation process. In short, the study argues that by developing specific dynamic capabilities enabled by IS, organisations can enhance their change process towards becoming sustainable entities. The thesis advances the current knowledge at the intersection of the sustainable supply chain management (SSCM) and IS fields in the following ways:
1. It improves our understanding of IS and the potential affordances emerging from their material properties, sustainability goals, and socio-technical conditions.
2. It extends the current knowledge of how IS enable the development of essential sustainability capabilities by applying a novel combination of Stakeholder Theory, Affordance Theory, and Dynamic Capability Theory.
3. It provides rich empirical evidence demonstrating that firms require certain dynamic capabilities to respond to challenges posed by emerging environmental and social issues.
4. It extends the literature by presenting a holistic view of sustainability transformation.
5. It provides insights into how IS can support firms to anticipate and deal with challenging social issues in supply chains.
6. It enhances our understanding of how sustainability transformation occurs in a developing country.
Learning to generalise through features
A Markov decision process (MDP) cannot be used to learn end-to-end control policies in reinforcement learning when the dimension of the feature vectors changes from one trial to the next. For example, this difference arises in an environment where the number of blocks to manipulate can vary. Because we cannot learn a different policy for each number of blocks, we suggest framing the problem as a partially observable Markov decision process (POMDP) instead of an MDP. This allows us to construct a constant observation space for a dynamic state space. There are two ways we can achieve such a construction. First, we can design a hand-crafted set of observations for a particular problem. However, that set cannot be readily transferred to another problem, and it often requires domain-dependent knowledge. Alternatively, a set of observations can be deduced from visual observations. This approach is universal, and it allows us to easily incorporate the geometry of the problem into the observations, which can be challenging to hard-code in the former method. In this thesis, we examine both of these methods. Our goal is to learn policies that generalise to new tasks. First, we show that a more general observation space can improve the performance of policies tested on untrained tasks. Second, we show that meaningful feature vectors can be obtained from visual observations. If properly regularised, these vectors can reflect the spatial structure of the state space and be used for planning. Using these vectors, we construct an auto-generated reward function that enables learning working policies.
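To make the idea of a constant observation space concrete, the sketch below maps a variable number of per-block feature vectors to a fixed-length observation using pooled statistics. This is a hypothetical construction for illustration only, not the method developed in the thesis (which uses hand-crafted or vision-derived observations):

```python
def fixed_size_observation(block_states):
    """Map a variable number of per-block feature vectors to a
    constant-length observation via pooled per-feature statistics
    (mean, max, min). Illustrative construction only; the thesis
    derives observations either by hand-crafting or from vision."""
    n = len(block_states)
    d = len(block_states[0])
    columns = [[row[j] for row in block_states] for j in range(d)]
    observation = []
    observation += [sum(col) / n for col in columns]  # per-feature mean
    observation += [max(col) for col in columns]      # per-feature max
    observation += [min(col) for col in columns]      # per-feature min
    return observation

# The observation length is fixed regardless of the number of blocks.
obs_three = fixed_size_observation([[0.1, 0.2], [0.3, 0.4], [0.5, 0.6]])
obs_five = fixed_size_observation([[0.0, 1.0]] * 5)
```

Because the pooled statistics are permutation-invariant, the same policy input layer can serve trials with three blocks or five.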
Scalable clustering of high dimensional data in non-disjoint axis-parallel subspaces
Clustering is the task of grouping similar objects together, where each group formed is called a cluster. Clustering is used to discover hidden patterns or underlying structures in data, and has a wide range of applications in areas such as the Internet of Things (IoT), biology, medicine, marketing, business, and computing. Recent developments in sensor and storage technology have led to a rapid growth of data, both in terms of volume and dimensionality. This raises challenges for existing clustering algorithms and has led to the development of subspace clustering algorithms that cope with the characteristics, volumes, and dimensionality of the datasets that are now available. In this thesis, we address the challenges of finding subspace clusters in high dimensional data to achieve subspace clustering with high quality and scalability. We provide a comprehensive literature review of existing algorithms, and identify the open challenges in subspace clustering of high dimensional data that we address in this thesis, namely: devising appropriate similarity measures, finding non-disjoint subspace clusters, and achieving scalability to high dimensional data. We further illustrate these challenges in a real-life application. We show that clustering can be used to construct a meaningful model of the pedestrian distribution in the City of Melbourne, Australia, in low dimensional space. However, we demonstrate that the clustering quality deteriorates rapidly as the number of dimensions (pedestrian observation points) increases. This also serves as a motivating example of why subspace clustering is needed and which challenges need to be addressed. We first address the challenge of measuring similarity between data points, which is a key challenge in analyzing high dimensional data that directly impacts the clustering results. We propose a novel method that generates meaningful similarity measures for subspace clustering.
Our proposed method considers the similarity between any two points as the union of base similarities that are frequently observed in lower dimensional subspaces. This allows our method to first search for similarity in lower dimensional subspaces and then aggregate these similarity values to determine the overall similarity. We show that this method can be applied for measuring similarity based on distance, density, and grids, which enables our similarity measurements to be used with different types of clustering algorithms, i.e., distance-based, density-based, and grid-based clustering algorithms. We then use our similarity measurement to build a subspace clustering algorithm that can find clusters in non-disjoint subspaces. Our proposed algorithm follows a bottom-up strategy. The first phase of our algorithm searches for base clusters in low dimensional subspaces. Subsequently, the second phase forms clusters in higher dimensional subspaces by aggregating these base clusters. The novelty of our proposed method is reflected in both phases. First, we show that our similarity measurement can be integrated into a subspace clustering algorithm. This not only helps prevent the false formation of clusters, but also significantly reduces the time and space complexity of the algorithm by pruning irrelevant subspaces at an early stage. Second, our algorithm transforms the common sequential aggregation of base clusters into a problem of frequent pattern mining. This enables the efficient formation of clusters in high dimensional subspaces using FP-Trees. We then demonstrate that our proposed algorithm can outperform traditional subspace clustering algorithms using bottom-up strategies, as well as state-of-the-art algorithms with other clustering strategies, in terms of clustering quality and scalability to large volumes and high dimensionality of data.
Subsequently, we evaluate the ability of our proposed subspace clustering algorithm to find clusters in datasets from different real-life applications. We conduct experiments on datasets from three different applications. First, we apply our proposed clustering algorithm to pedestrian measurements in the City of Melbourne, and construct a meaningful model that describes the profiles of the distributions of pedestrians corresponding to pedestrian activities at different times of the day. We then use our clustering algorithm to analyze the impacts of a major change in Melbourne's public transport network on the activities of pedestrians. In the second application, we evaluate the ability of the proposed algorithm to work with very high dimensional data. Specifically, we apply our algorithm to ten gene expression datasets, which comprise up to 10,000 dimensions. Next, we explore the ability of our algorithm to produce clustering results that can be used as an intermediate step assisting the construction of a more complicated model. Specifically, we use our clustering results to build an ensemble classification model, and show that this model improves the accuracy of predicting car parking occupancy in the central business district (CBD) of the City of Melbourne. By applying our proposed methods in a wide range of applications on datasets with different sizes and dimensionalities, we demonstrate the ability of our algorithm to cluster high dimensional datasets that possess complex structures with high levels of noise and outliers, producing meaningful clustering results that can have practical impact.
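The second phase, recasting base-cluster aggregation as frequent pattern mining, can be sketched as follows. The brute-force enumeration below is purely illustrative: the hypothetical `frequent_cluster_combinations` helper is not from the thesis, which mines these combinations efficiently with FP-Trees rather than exhaustive counting:

```python
from itertools import combinations
from collections import defaultdict

def frequent_cluster_combinations(memberships, min_support):
    """memberships: point_id -> set of base-cluster ids, each base
    cluster found in a low dimensional subspace. Returns combinations
    of base clusters shared by at least `min_support` points, i.e.,
    candidates for clusters in higher dimensional subspaces.
    Brute-force for illustration; the thesis uses FP-Trees."""
    counts = defaultdict(int)
    for clusters in memberships.values():
        for r in range(2, len(clusters) + 1):
            for combo in combinations(sorted(clusters), r):
                counts[combo] += 1
    return {combo: n for combo, n in counts.items() if n >= min_support}

memberships = {
    1: {"c1", "c2"},
    2: {"c1", "c2"},
    3: {"c1", "c2", "c3"},
    4: {"c3"},
}
# ("c1", "c2") is shared by points 1, 2 and 3, so it survives support 3.
result = frequent_cluster_combinations(memberships, min_support=3)
```

Treating each point's base-cluster memberships as a transaction is what lets standard frequent-itemset machinery replace sequential pairwise merging.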
Effective Transportation Models for Sharing Economy Through Graph Theory
Ride-sharing and crowdsourced delivery are two sharing-economy transportation models that have attracted growing attention in academic research and industry. Although large companies have popularized these terms through their platforms and the online services they offer, their versions do not follow the sharing-economy vision. Ride-sourcing and delivery-sourcing are the proper names for such services, as these companies outsource private drivers to serve transportation requests. As a consequence, instead of reducing the number of vehicles on the roads, the opposite effect has been observed: in some US and European cities, ride- and delivery-sourcing have become main contributors to congestion and pollution. Ride- and delivery-sourcing are convenient for drivers, who make a living from them, and for customers, who pay less than they would for a taxi (resp. a courier company). Therefore, this thesis proposes novel ride-sharing and crowdsourced delivery models that can compete with the convenience, trust, and incentives provided by ride- and delivery-sourcing. Our first model exploits the advantages that suitable meeting points provide in ride-sharing. In our second model, congestion caused mainly by ride-sharing is considered. Our last model challenges the way retailers optimize their delivery process by regarding routing and pick-up store selection as parts of the same optimization process. Finding optimal travel plans for instances of our novel models is equivalent to solving graph-theoretic problems. We contribute novel graph problems related to well-known problems such as the Minimum Steiner Tree, Multicast Congestion, and Hamiltonian Path problems. Our novel problems are solved efficiently through heuristic-based algorithms whose effectiveness we demonstrate in comparison with optimal solutions and state-of-the-art approaches.
Designing a framework to measure patient-reported health effects and outcomes from using person-generated health data: A simulated rehabilitation system use case
There is rising availability and growing adoption of a variety of health information technologies (HIT) that promote participatory health by enabling people to access and use a form of health data called person-generated health data (PGHD). PGHD is produced when people use HITs to track, manage, and make sense of their health data, which often occurs outside of traditional health care settings or with minimal intervention from health professionals. Reported effects of PGHD utilisation vary in the literature. While positive health effects have been reported when people utilise their PGHD, PGHD utilisation can also result in neutral or negative effects. For instance, PGHD utilisation may increase the motivation of PGHD users to adhere to healthy behaviours and self-manage their care. However, it may also make them feel frustrated or discouraged with their progress. In order to build the evidence base about PGHD for health informatics research, it is imperative that a more systematic assessment of PGHD utilisation effects is developed. Therefore, the objective of this thesis has been to design a framework that enables PGHD users to routinely report standard measures of how PGHD utilisation may affect them. The overarching research question posed to fulfill this objective is: How could users’ reporting of PGHD effects be standardised, for any given PGHD technology and health condition? In the process of answering the research question, it was determined that among the different types of patient-reported measures (PRMs), a patient-reported outcome measure of utilising PGHD (PROM-PGHD) should be developed for the exemplar case of simulated stroke rehabilitation technology. This is because a PROM-PGHD would allow for a more precise, patient-centred assessment of a variety of PGHD technologies, and could increase understanding of how such technologies may impact people's health status.
To develop the PROM-PGHD, the author designed a PROM-PGHD Development Method comprising multi-step qualitative activities with three main sources of data collection: a literature review to contextualise the problem, a conceptual review of patient-reported measures and development best practice, and audio-recorded focus groups and interviews to elicit input from people with the case health condition. In answering the research question through the development of a PROM-PGHD, a new framework emerged for developing PRM types of utilising PGHD. The framework has four components: 1) problem contextualisation through a literature review, 2) selection of a PRM, 3) review of selected PRM development best practice, and 4) multi-step qualitative development of a type of PRM of utilising PGHD, consolidation of insights, and reporting. This thesis contributes to health informatics research by providing early, significant steps towards the goal of using systematic, validated processes to generate evidence on the health effects of PGHD utilisation and to improve PGHD technology design.
Gaze-Based Intention Recognition for Human-Agent Collaboration: Towards Nonverbal Communication in Human-AI Interaction
Human-agent collaboration has repeatedly been proposed over the decades as a way forward to leverage the strengths of artificial intelligence. As it has become common for humans to work and play alongside intelligent agents, it is increasingly imperative to improve the capacity of agents to interact with their human counterparts socially, naturally, and effectively. However, current agents are still limited in their capacity to recognise nonverbal signals and cues, which in turn limits their capabilities for natural interaction. This thesis addresses this limitation by investigating how artificial agents might support humans in real-time collaboration, given the increased capacity to recognise human intentions afforded by processing their gaze data in real time. We hypothesise that a socially interactive agent with an increased capacity to recognise intentions can drastically improve its interactive capability, such as by adapting its recommendations to its collaborator's anticipated intentions as well as to the intentions of others. Using a scenario-based design approach, we designed five studies to inform and evaluate the different capabilities of a collaborative gaze-enabled intention-aware artificial agent. In Studies 1 and 2, we first evaluate the capacity of human subjects to perform intention recognition using gaze visualisation, and its corresponding effects, in a competitive gameplay setting. The findings showed that human players could improve their capacity to infer the intentions of their opponent when shown a live visualisation of the opponent's gaze throughout the game. However, this capacity was hampered when the opponent was aware that their gaze was being watched. The findings further indicate that humans have a limited capacity for performing gaze-based intention recognition, suggesting that the task may be more suitable for an artificial agent trained to process the rich multimodal information available in our setting.
In Study 3, we present the implementation details and evaluation of a gaze-enabled intention-aware artificial agent, developed as part of this thesis, that incorporates gaze into its intention recognition process. The evaluation, which uses the data from the previous two studies, demonstrates that incorporating gaze into the agent's planning process not only increases the agent's capacity to recognise intentions but also enables it to perform better overall than human subjects. In Studies 4 and 5, we operationalise the artificial agent by giving it both the ability to communicate the intentions of the opponent to its human collaborator and the ability to explain its reasoning process if required. Subsequently, we evaluated the experience of players playing the game with and without the assistance of the agent, which ultimately provided insights into how we can further improve the interaction between a human and an intention-aware artificial agent. The findings in this thesis result in three contributions towards the understanding of how artificial agents can support human-agent collaboration, given an increased capacity to recognise intentions with eye-tracking. The findings from Studies 1, 2, and 3 extend our understanding of the relationship between gaze awareness and intention by demonstrating that gaze, when tracked over time, can lead to the detection of distal intentions (i.e., long-term intentions that often require several steps to be fulfilled). Subsequently, Studies 3, 4, and 5 contribute to the design of a collaborative gaze-enabled intention-aware artificial agent, and to the demonstration of increased situation awareness through gaze awareness for human-agent collaboration. Overall, the thesis highlights the importance of incorporating nonverbal communication in human-AI interaction.
Efficient Stateful Computations in Distributed Stream Processing
Stream processing is used in a plethora of applications that deal with high volumes and varieties of data. The focus on scalable and efficient stream processing solutions has been increasing due to the vast number of time-sensitive applications, such as electronic trading and fraud detection, that have low latency requirements. While stream processing systems were originally envisaged to use only stateless computations, the use of stateful computations has grown to accommodate a greater range of complex stream processing applications in various domains. Unlike stateless computations, supporting stateful computations requires addressing new challenges, including state distribution to achieve scalability and state sharing among resources in a distributed environment. For instance, most stateful computations have synchronization requirements that need to be satisfied to guarantee the correctness of the results. Therefore, efficient mechanisms that can support scalable state distribution and state sharing while ensuring correctness of the results are needed to satisfy the low latency requirements of stream processing applications. Moreover, a fault-tolerance mechanism to recover state after failures is an essential functionality required to support stateful computations, and minimizing the overhead imposed by the fault-tolerance mechanism is another challenge associated with stateful computations. This thesis first focuses on providing models to support complex stateful use cases. Windows, which are used to partition the continuous input streams expected in streaming applications, are a main component of stream processing systems. The existing models that define window semantics do not represent use cases that have a hierarchy of window stages; therefore, we propose a generic model for stream processing that supports a hierarchical approach to windowing.
Then we propose a communication model to support iterative computations, one of the most common types of stateful computation. Due to communication restrictions that limit the ways to share the state of iterative computations, existing approaches used to represent iterative computations have limitations in terms of scalability and efficiency. We address these scalability issues and provide an efficient way to share the state of iterative computations in a distributed environment. We demonstrate that our model can support different iterative algorithms that have complex communication patterns, and show the scalability and high performance of the proposed model compared to the traditional approaches used for constructing iterative streaming applications. For example, our model outperforms existing state-of-the-art solutions by 72% in terms of throughput and 65% in terms of latency in some cases. Next, we investigate checkpointing, the most common fault-tolerance approach used by existing systems, and address how we can minimize the overhead imposed by the checkpointing process. We derive an expression for the optimal checkpoint interval that gives the maximum system utilization using a theoretical model, and validate the model using a set of simulations. To the best of our knowledge, this is the first theoretical optimization framework for stream processing systems that use a global checkpointing approach. Our model yields an elegant expression for the optimal checkpoint interval, interestingly showing the optimal checkpoint interval to be dependent only on the checkpoint cost and the failure rate of the system. Next, we use the derived optimal checkpoint interval in real-world streaming applications and demonstrate that the theoretical optimal interval can improve the performance of practical applications.
We demonstrate that our theoretical optimal checkpoint interval can achieve utilization improvements of 10% to 200% for a range of failure rates from 0.3 failures per hour to 0.075 failures per minute, compared to the default checkpoint interval of 30 minutes used by most systems. Moreover, we show that the optimal interval results in lower latency and higher throughput, with a 54% throughput increase and a 58% latency decrease in some cases. Then we investigate the multi-level checkpointing approach, which was introduced to address the inefficiencies of single-level checkpointing, and derive the optimal checkpointing parameters that minimize the overhead of the multi-level checkpointing process. This work is the first to present a theoretical framework for determining optimal parameter settings in a multi-level global checkpointing system that uses a single periodic checkpoint interval. We demonstrate that our solution outperforms existing single-level optimizations in terms of utilization by as much as 36% in some cases.
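For context on why the optimum depends only on checkpoint cost and failure rate, the classical single-level analysis (Young's approximation) yields an interval of the same shape: T_opt = sqrt(2C / lambda) for checkpoint cost C and failure rate lambda. The thesis derives its own expression for streaming systems; the sketch below uses the classical formula purely to illustrate that dependence:

```python
import math

def optimal_checkpoint_interval(checkpoint_cost_s, failure_rate_per_s):
    """Young's classical approximation for the optimal periodic
    checkpoint interval: T_opt = sqrt(2 * C / lambda). Shown only to
    illustrate that the optimum depends solely on checkpoint cost and
    failure rate; the thesis derives its own expression for global
    checkpointing in stream processing systems."""
    return math.sqrt(2.0 * checkpoint_cost_s / failure_rate_per_s)

# Example: 10 s checkpoint cost, 0.3 failures per hour (a hypothetical
# workload). The resulting interval is roughly 490 s, far shorter than
# a 30-minute default.
interval = optimal_checkpoint_interval(10.0, 0.3 / 3600.0)
```

A default interval far above the optimum wastes work replayed after each failure, which is consistent with the large utilization gains reported above.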
Profit optimization of resource management for big data analytics-as-a-service platforms in cloud computing environments
Discovering optimal resource management solutions to support data analytics to extract value from big data is an increasingly important research area. It is fair to say that the success of many organizations, companies, and individuals now relies heavily on data analytics solutions. Cloud computing greatly supports big data analytics by providing scalable resources based on user demand and supporting elastic resource provisioning in a pay-as-you-go model. Big data Analytics-as-a-Service (AaaS) platforms provision AaaS to various domains as consumable services in an easy-to-use manner across cloud computing environments. AaaS platforms aim to deliver efficient data analytics solutions to benefit decision-making and problem solving in a wide range of application domains such as engineering, science, and government. However, big data analytics solutions face a range of challenges: the dynamic nature of query requests; the heterogeneity of cloud resources; the different Quality of Service (QoS) requirements; the potential for lengthy data processing times and associated expensive resource costs; and the need to meet big data processing demands under potentially limited/constrained budgets, deadlines, and/or data accuracies. These challenges need to be tackled by efficient resource management solutions that support AaaS platforms in delivering reliable, cost-effective, and fast AaaS. Optimal resource management solutions are essential for AaaS platforms to maximize profits and minimize query times while guaranteeing Service Level Agreements (SLAs) during AaaS delivery. To tackle the above challenges, this thesis systematically studies profit optimization solutions to support AaaS platforms. Key contributions are made through a range of resource management solutions.
These include admission control and resource scheduling algorithms that address various problem scenarios where data needs to be processed under heterogeneous, constrained, or limited budgets, deadlines, or accuracies, with support for data splitting and/or data sampling-based methods to reduce data processing times and costs with potential accuracy trade-offs. These algorithms allow AaaS platforms to optimize profits and minimize query times through optimal resource management, thereby increasing market share by maximizing query admissions and improving reputation by delivering SLA-supported AaaS solutions.
Older Adults Designing Avatars for Self-expression
Representations of older age are frequently associated with bodies in decline. Looking old can trigger discriminatory social behaviours and conceal the richness of the lived experience. Avatars, full-body digital self-representations of the user, influence the way users think and behave in virtual environments (VEs). As older adults increasingly participate in online spaces that use avatars for self-representation, it is essential to understand how best to support their online self-representations. This thesis addresses this gap by engaging older adults in designing their full-body avatars. Across four studies using research through design, this thesis provides older adults’ views about how they want to be graphically represented. In Study 1, I conducted gameplay observations and semi-structured interviews that provided an initial understanding of how older adults who play online games projected aspects of their identities into their player self-representations. The study revealed that participants designed their player self-representations by projecting aspects of their past (former) selves and embracing their present older selves. For Studies 2, 3, and 4, I engaged a group of older adults aged 70-80 in designing avatars. In Study 2, older adults designed a full-body avatar during a group design workshop. The study demonstrated that older adults negotiate with ageing stereotypes when creating their avatar designs. Some participants reproduced realistic representations of their aged appearance, suggesting acceptance of ageing bodies; others idealised their avatars by depicting healthy bodies or societal ideals of youth. This study highlighted that the character creation interfaces (CCIs) in which users designed the avatars presented limited design choices for portraying ageing features. Informed by these outcomes, Study 3 explored whether the graphic styles of the avatar customisation prompted older adults’ expressions of identity.
Through extended individual design sessions, participants designed a photorealistic avatar and a cartoon avatar. The analysis of the individual design journeys demonstrates that participants conformed to social norms through the design of the photorealistic avatar and rebelled against these social norms through the design of the cartoon avatar. While the photorealistic avatar prompted participants to reflect on the appearance of the ageing body, the cartoon avatar design supported the expression of hidden aspects of the self. Finally, in Study 4, older adults participated in virtual reality sessions over four months, choosing between the predesigned photorealistic or cartoon avatar for each session. Alongside the VR sessions, some participants modified their avatar designs in further avatar customisation sessions. This study evaluated how the context (people, place, and purpose) influences older adults’ expressions of identity through avatars. The analysis highlighted gender differentiation and revealed that participants chose the photorealistic avatar to conform to social norms of older age when meeting peers of similar age. This research contributes a schematic that illustrates how older adults’ self-expression through avatars is mediated by the design choices available in the avatar creation software and by the context in which the avatar is used. Furthermore, this research shows that designing avatars is a powerful mechanism that supports self-expression, reflection, and experimentation. These results have implications for designing CCIs and online environments that cater to the preferences of older individuals.
Embedding Graphs for Shortest-Path Distance Predictions
Graphs are an important data structure used in an abundance of real-world applications, including navigation systems, social networks, and web search engines, to name but a few. We study a classic graph problem: computing graph shortest-path distances. This problem has many applications, such as finding nearest neighbors for place-of-interest (POI) recommendation or social network friendship recommendation. To compute a shortest-path distance, traditional approaches traverse the graph to find the shortest path and return the path length. These approaches lack time efficiency over large graphs. In the applications above, the distances may be needed first (e.g., to rank POIs), while the actual shortest paths may be computed later (e.g., after a POI has been chosen). Thus, an alternative approach precomputes and stores the distances, and answers distance queries with simple lookups. This approach, however, falls short in its space cost: O(n^2) in the worst case for n vertices, even with various optimizations. To address these limitations, we take an embedding-based approach to predict the shortest-path distance between two vertices using their embeddings, without computing their path online or storing their distance offline. Graph embedding is an emerging technique for graph analysis that has yielded strong performance in applications such as node classification, link prediction, graph reconstruction, and more. We propose a representation learning approach to learn a k-dimensional (k << n) embedding for every vertex. This embedding preserves the distance information of the vertex to the other vertices. We then train a multi-layer perceptron (MLP) to predict the distance between two vertices given their embeddings. We thus achieve fast distance predictions without a high space cost (i.e., only O(kn)).
Experimental results on road network graphs, social network graphs, and web document graphs confirm these advantages: our approach produces distance predictions that are up to 97% more accurate than those of the state-of-the-art approaches. Our embeddings are not limited to distance prediction. We further study their applicability to other graph problems, such as link prediction and graph reconstruction. Experimental results show that our embeddings are highly effective in these tasks.
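A common way to reuse node embeddings for link prediction, one of the tasks mentioned above, is to score candidate vertex pairs by embedding similarity. A minimal sketch with hand-picked illustrative embeddings (not the thesis's learned ones) and cosine similarity as the score:

```python
import numpy as np

# Illustrative 2-D node embeddings (assumed values, not learned here).
emb = np.array([
    [1.0, 0.0],  # node 0
    [0.9, 0.1],  # node 1, close to node 0
    [0.0, 1.0],  # node 2
    [0.1, 0.9],  # node 3, close to node 2
])

def link_score(pair):
    """Cosine similarity between two node embeddings; higher suggests a link."""
    a, b = emb[pair[0]], emb[pair[1]]
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

candidates = [(0, 1), (0, 2), (2, 3)]
ranked = sorted(candidates, key=link_score, reverse=True)
print(ranked)  # (0, 2) ranks last: those two embeddings are orthogonal
```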
Individual use of Enterprise 2.0 and its impact on social capital within large organisations
In recent years, there has been significant momentum in the adoption of Enterprise 2.0 in large organisations. Enterprise 2.0 adoption encapsulates the use of integrated social media tools in a unified social networking platform to support business operations. Many organisations are adopting Enterprise 2.0 in the hope of achieving better knowledge retention, faster information discovery, innovation, employee engagement, and higher productivity through social networking. While organisations herald Enterprise 2.0 with great promise, it is still not clear how individuals’ use of Enterprise 2.0 results in organisational benefits. This thesis aims to contribute to a better understanding of the individual use of Enterprise Social Networks (Enterprise 2.0) in large organisations and how this use leads to organisational benefits. To achieve this objective, this research applies social capital theory as the basis for analysing an organisation’s rich set of relationships and the organisational value they bring as a result of using Enterprise 2.0. In-depth research and a critical review of relevant previous studies and established theories were conducted to delineate the components within the structural, cognitive, and relational social capital dimensions. Sixty in-depth interviews were conducted with participants from six large organisations in Australia who were actively using an existing implementation of Enterprise 2.0 (i.e. Yammer and Oracle Social Network). For an all-encompassing and unbiased view, participants with varying roles and responsibilities, levels of expertise, and usage types were selected from the identified organisations. To ensure unbiased and granular categorisation of use modes, the research analysed the data collected from the interviews, observations, and notes using a grounded approach to identify emerging trends and patterns of individual Enterprise 2.0 use.
The results generate new insights in the form of seven distinct individual use modes of Enterprise 2.0. The findings also present rich insights into the impact of these varied individual use modes on each of the structural, relational, and cognitive social capital dimensions. In addition, the research reveals novel insights into how different user types benefited from using Enterprise 2.0 from a social capital perspective. Finally, the study demonstrates the supportive and enabling role of Enterprise 2.0 as a platform for building social capital. The research contributes to a deeper understanding of the individual use of Enterprise 2.0 and of social capital theory. It also offers seven insights to help shape future research and to guide managers in large organisations in setting up, managing, and promoting individual use of Enterprise 2.0.
Mapping the structural connectome and predicting functional connectivity with deep learning methods
Mapping the human connectome is a major goal in neuroscience, where the connectome refers to a comprehensive network description of the brain. This network is often represented as a graph, where nodes denote brain regions and edges represent white matter pathways. Tractography is a computational reconstruction method based on diffusion-weighted magnetic resonance imaging (dMRI) that estimates millions of streamlines tracing out the trajectories of white matter fiber bundles. The number of streamlines interconnecting each pair of regions in a predefined cortical parcellation is computed to yield a structural connectivity matrix. Network analyses of these connectivity matrices have yielded new insights into brain disorders (such as schizophrenia and Alzheimer’s disease), cognition, and neurodevelopmental processes. Moreover, the temporal dependence of the neuronal activity patterns of different brain regions (functional connectivity) is associated with the underlying neuronal pathways (structural connectivity). In this thesis, we analyse the capabilities of state-of-the-art tractography algorithms (deterministic and probabilistic) for mapping connectomes and develop algorithms that overcome the limitations of conventional tractography algorithms for connectome mapping. We also utilize this structure-function coupling to train deep neural networks to predict functional connectivity from structural connectivity. In the first part of the thesis, we develop numerical connectome phantoms that feature realistic network topologies and match the fiber complexity of in vivo dMRI. The connectivity between pairs of regions is predefined for these phantoms. The phantoms are used to evaluate the performance of tensor-based and multi-fiber implementations of deterministic and probabilistic tractography.
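The streamline-count connectivity matrix described above can be sketched in a few lines. This is a toy illustration of the bookkeeping only; real pipelines assign streamline endpoints to parcellation regions from imaging data rather than taking labels as given:

```python
import numpy as np

def connectivity_matrix(streamline_endpoints, n_regions):
    """Count streamlines linking each pair of parcellation regions.

    streamline_endpoints: one (region_i, region_j) label pair per streamline.
    Returns a symmetric n_regions x n_regions streamline-count matrix.
    """
    C = np.zeros((n_regions, n_regions), dtype=int)
    for i, j in streamline_endpoints:
        C[i, j] += 1
        if i != j:
            C[j, i] += 1  # keep the matrix symmetric
    return C

# Toy example: five streamlines among three regions.
endpoints = [(0, 1), (0, 1), (1, 2), (2, 0), (1, 1)]
C = connectivity_matrix(endpoints, 3)
print(C[0, 1])  # → 2 streamlines connect regions 0 and 1
```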
We found that multi-fiber deterministic tractography yields the most accurate connectome reconstructions, whereas probabilistic algorithms are hampered by an abundance of spurious connections. It is essential to omit the connections with the fewest streamlines (thresholding) when using probabilistic algorithms for mapping connectomes. The study suggests that multi-fiber deterministic tractography is well suited for connectome mapping, regardless of the streamline threshold. In the second part, we propose a novel framework to map structural connectomes using deep learning. This framework not only enables connectome mapping with a convolutional neural network (CNN) but can also be straightforwardly incorporated into conventional connectome mapping pipelines (using tractography) to enhance their accuracy. The framework decomposes the entire brain volume into overlapping blocks that are sufficiently small for a CNN to be efficiently trained to predict each block’s internal connectivity architecture. A block stitching algorithm then rebuilds the full brain volume from these blocks and thereby maps end-to-end connectivity matrices. Performance is evaluated using simulated dMRI data generated from numerical connectome phantoms with known ground-truth connectivity. Owing to the redundancy achieved by allowing blocks to overlap, the block decomposition and stitching steps can enhance the accuracy of probabilistic and deterministic tractography algorithms by up to 20-30%. Various studies have reported that functional brain connectivity is associated with underlying structural characteristics. In the third part of the thesis, we utilize this structure-function coupling to develop a novel deep learning framework that predicts functional connectivity from structural connectivity. The framework predicts functional connectivity without explicitly modelling the biophysical characteristics of the brain.
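The overlap-and-average idea behind the block decomposition and stitching steps can be shown in a 1-D sketch. The real pipeline works on 3-D dMRI volumes with a trained per-block CNN; here the "prediction" is just the block itself, so stitching should reproduce the input exactly:

```python
import numpy as np

def decompose(volume, block_size, stride):
    """Split a signal into overlapping blocks (1-D here; 3-D in the pipeline)."""
    starts = range(0, len(volume) - block_size + 1, stride)
    return [(s, volume[s:s + block_size].copy()) for s in starts]

def stitch(blocks, length):
    """Rebuild the signal, averaging wherever block predictions overlap."""
    total = np.zeros(length)
    count = np.zeros(length)
    for start, block in blocks:
        total[start:start + len(block)] += block
        count[start:start + len(block)] += 1
    return total / np.maximum(count, 1)

volume = np.arange(10, dtype=float)
blocks = decompose(volume, block_size=4, stride=2)
# Stand-in for the per-block CNN: each block's "prediction" is the block
# itself, so averaging the overlaps recovers the original signal.
reconstructed = stitch(blocks, len(volume))
print(np.allclose(reconstructed, volume))  # → True
```

Because the stride is smaller than the block size, every position is covered by more than one block near the interior, and averaging these redundant predictions is what smooths out per-block errors.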
We demonstrated that a neural network can predict functional connectivity with high accuracy while preserving inter-subject functional differences. Furthermore, we demonstrated that functional connectivity could be used to predict human behavior, namely cognition. Altogether, the analyses and frameworks presented in this thesis aid in extracting structural connectivity and in understanding the complex relationships between functional and structural connectivity in the human brain.
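The structural-to-functional mapping can be sketched as a forward pass through a small multilayer network over vectorised connectivity edges. The weights below are random purely to show the shapes involved; the thesis trains such a network on real subject data, and the architecture here is an assumption, not the thesis's:

```python
import numpy as np

rng = np.random.default_rng(0)

n = 4                            # brain regions (toy parcellation)
n_edges = n * (n - 1) // 2       # upper-triangle edges

# Toy symmetric structural connectivity matrix (streamline counts).
S = rng.integers(1, 50, size=(n, n)).astype(float)
S = (S + S.T) / 2.0
x = S[np.triu_indices(n, k=1)] / S.max()   # vectorised, normalised edges

# Two-layer network mapping structural edges to functional edges.
# Random, untrained weights: for illustration of shapes only.
W1, b1 = rng.normal(size=(n_edges, 16)), np.zeros(16)
W2, b2 = rng.normal(size=(16, n_edges)), np.zeros(n_edges)
hidden = np.tanh(x @ W1 + b1)
f_pred = np.tanh(hidden @ W2 + b2)         # predicted correlations in [-1, 1]

# Reassemble the prediction into a symmetric functional connectivity matrix.
F = np.zeros((n, n))
F[np.triu_indices(n, k=1)] = f_pred
F = F + F.T
print(F.shape)  # → (4, 4)
```

Working on vectorised upper-triangle edges keeps the input and output sizes at O(n^2/2) while letting a standard dense network learn the edge-to-edge coupling.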