The Electronic Journal of e-Government publishes perspectives on topics relevant to the study, implementation and management of e-Government


Journal Article

Developing Generic Shared Services for e‑Government  pp31-38

Marijn Janssen, René Wagenaar

© Jun 2004 Volume 2 Issue 1, Editor: Frank Bannister, pp1 - 74


Abstract

Current e-Government initiatives are highly fragmented and poorly coordinated. An architectural approach aimed at reusing components as shared services can support government agencies in implementing their e-Government initiatives. In this paper we describe research aimed at identifying and prioritising generic services that can be shared among public agencies. Generic shared services are identified and prioritised by technical experts and government representatives in a group support system session. This resulted in an action plan to implement the services and use them as part of future e-Government projects.
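The prioritisation step lends itself to a simple illustration. The sketch below is not from the paper: it only shows one plausible way that importance scores collected from experts in a group support system session might be aggregated to rank candidate shared services. The service names and scores are invented.

```python
# Hypothetical illustration: ranking candidate shared services by averaged expert scores.
# Service names and scores are invented; they do not come from the paper.
from statistics import mean

# Each expert scores each candidate service on importance (1 = low, 5 = high).
scores = {
    "authentication":      [5, 4, 5, 4],
    "electronic payments": [4, 4, 3, 5],
    "document exchange":   [3, 2, 4, 3],
    "address registry":    [5, 5, 4, 4],
}

# Rank services by mean score, highest first, as a crude prioritisation.
ranking = sorted(scores.items(), key=lambda kv: mean(kv[1]), reverse=True)

for service, votes in ranking:
    print(f"{service:22s} mean importance: {mean(votes):.2f}")
```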

 

Keywords: Architecture, group support system, e-Government, shared services, data centres, shared service centre

 


Journal Article

Long‑term Digital Archiving — Outsourcing or Doing it  pp135-144

Mitja Decman

© Dec 2007 Volume 5 Issue 2, ECEG 2007, Editor: Frank Bannister, pp95 - 224


Abstract

Governments all over the world are confronted with a growing sphere of electronic data, a consequence of the ever-increasing use of information technology (IT). The data is piling up on desktop computers, servers, tapes, CDs, etc. Only in the last decade did senior officials and the political elite begin to ask how this data will be preserved as evidence of e-government actions for the near and distant future and for posterity. Given the nature of electronic records compared with paper, keeping electronic data is a "non-stop" job, whereas keeping classical paper records can be characterised as a "store-and-leave" job. New legislation and standards regarding the management and archiving of electronic data are emerging, as are practical solutions in the form of information systems. At the point of implementation, organisations can face substantial expenses and the question of how best to implement such a system. Whether to outsource the service of long-term digital archiving to external contractors or to implement it within government itself is the topic of this paper. The paper focuses on the organisational, technical and financial aspects of the dilemmas "to outsource or not", "parts or the whole service", how to do it, etc. It analyses the decision factors and draws conclusions from theory and from the results of several survey projects. It presents the results of an empirical study of the digital archiving field in the public sector of Slovenia, which also examined the outsourcing of the digital archiving service or of individual segments of this service. The results from the public sector are also compared with those for the private sector.
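The financial side of the "outsource or not" dilemma can be pictured with a toy cost comparison. The sketch below is purely illustrative: all figures, cost components and time horizons are invented placeholders, not data from the paper or from the Slovenian survey it reports.

```python
# Toy cost comparison for long-term digital archiving: outsource vs. in-house.
# All figures are invented placeholders, not data from the paper or its surveys.

def in_house_cost(years, setup=250_000, yearly_ops=60_000):
    """One-off system setup plus ongoing operation, migration and staff costs."""
    return setup + years * yearly_ops

def outsourced_cost(years, yearly_fee=90_000):
    """Recurring contractor fee covering storage, preservation and access."""
    return years * yearly_fee

for horizon in (5, 10, 20):
    print(f"{horizon:2d} years: in-house {in_house_cost(horizon):>9,} "
          f"vs outsourced {outsourced_cost(horizon):>9,}")
```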

 

Keywords: archiving, electronic data, long term, digital preservation, outsourcing, recordkeeping, digital archive

 


Journal Article

XML Schema Design and Management for e‑Government Data Interoperability  pp371-380

Thomas Lee, C.T. Hon

© Dec 2009 Volume 7 Issue 4, ECEG 2009, Editor: Frank Bannister, pp295 - 432


Abstract

One-stop public services and single window systems are primary goals of many e-government initiatives. Facilitating technical and data interoperability among the systems of different government agencies is key to meeting these goals. While many software standards, such as Web Services and ebXML, have been formulated to address interoperability between different technical platforms, the data interoperability problem remains a major challenge. Data interoperability concerns how different parties agree on what information to exchange, and on the definition and representation of that information. To address this problem, the Hong Kong government has released the XML Schema Design and Management Guide as well as the Registry of Data Standards under its e-Government Interoperability Framework initiative. This paper introduces how the data modelling methodology provided by the Guide can be used to develop data interfaces and standards for e-government systems. We also discuss how the Macao government has formulated its data interoperability policy and applied the Guide to its own situation.
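As a concrete illustration of schema-based data interoperability, the hedged sketch below validates an XML message against an XSD using Python's lxml library. The element names, schema and message are invented for illustration and are not taken from the Hong Kong Guide or the Registry of Data Standards.

```python
# Hypothetical example: validating an exchanged XML message against an agreed schema.
# The schema and instance below are invented; real data standards would come from
# a registry such as the one described in the abstract.
from lxml import etree

schema_doc = etree.XML(b"""
<xs:schema xmlns:xs="http://www.w3.org/2001/XMLSchema">
  <xs:element name="Address">
    <xs:complexType>
      <xs:sequence>
        <xs:element name="Street" type="xs:string"/>
        <xs:element name="District" type="xs:string"/>
      </xs:sequence>
    </xs:complexType>
  </xs:element>
</xs:schema>
""")
schema = etree.XMLSchema(schema_doc)

message = etree.XML(b"<Address><Street>1 Example Road</Street>"
                    b"<District>Central</District></Address>")

# A receiving agency would reject messages that do not conform to the shared schema.
print("valid" if schema.validate(message) else schema.error_log)
```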

 

Keywords: e-government data interoperability, XML schema

 


Journal Article

Segmentation of the PAYE Anytime Users  pp104-119

Jessica Clancy, Giuseppe Manai, Duncan Cleary

© Dec 2010 Volume 8 Issue 2, ECEG Conference Issue, Editor: Frank Bannister, pp83 - 235


Abstract

PAYE anytime is a web application designed and implemented by the Office of the Revenue Commissioners of Ireland. The application allows Pay As You Earn (PAYE) customers in Ireland to manage most of their tax affairs online. Using easily accessible technology, PAYE customers can update their information and process most of their tax credits and reliefs online in a clear and effective manner. This online system was designed and implemented to reduce the volume of direct contacts between Revenue and its PAYE customers, to decrease costs and to improve overall efficiency and effectiveness within the organisation. Moreover, the usage of such an e-channel allows Revenue to record important information that can be analysed with the aim of improving overall customer service. Management of this strategic contact channel is therefore paramount to Revenue's continued advancement and improvement of its online services. This paper describes a segmentation of PAYE anytime users. The segmentation was conducted to understand the profiles and behaviours of these customers. This unsupervised data mining method produces an unbiased, self-directed portrait of PAYE anytime customers. The data analysed were extracted from the weblogs of the PAYE anytime online application, which contain information about the users' navigation. The data were linked to the users' information held in the Revenue data warehouse in order to access all recorded details about PAYE anytime users. This information consists of the tax credits claimed, the value of tax credits, time period and similar attributes. By linking online behaviour with the users' information and mapping on the demographic details of the users, it was possible to identify the different segments and their profiles. The results of this segmentation improve Revenue's understanding of the PAYE customer base. Knowledge gained from this project can be applied in a number of areas. Naturally, the profiles and behaviours associated with each segment can be used strategically for customer intelligence policies, allowing specific services to be tailored around customer profiles. Moreover, the analysis can point to improvements in the design and structure of future iterations of the PAYE anytime application.
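The abstract does not name the specific clustering algorithm used, so the sketch below only illustrates the general idea of unsupervised segmentation: it applies k-means from scikit-learn to a few invented weblog-derived features. The feature set, data values and choice of algorithm are assumptions, not the authors' implementation.

```python
# Illustrative sketch of segmenting users from weblog-derived features.
# The features, values and choice of k-means are assumptions for illustration;
# the paper does not specify this particular algorithm or feature set.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

# Rows: users. Columns: sessions per year, pages per session, tax credits claimed.
features = np.array([
    [1, 3, 1],
    [2, 4, 2],
    [12, 8, 5],
    [10, 9, 4],
    [5, 2, 1],
    [6, 3, 2],
], dtype=float)

# Standardise so no single feature dominates the distance measure.
scaled = StandardScaler().fit_transform(features)

# Partition users into three behavioural segments.
model = KMeans(n_clusters=3, n_init=10, random_state=0).fit(scaled)
print("segment labels:", model.labels_)
```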

 

Keywords: segmentation, weblog analysis, association analysis, data mining, customer behaviour

 


Journal Article

Predictive Analytics in the Public Sector: Using Data Mining to Assist Better Target Selection for Audit  pp132-140

Duncan Cleary

© Dec 2011 Volume 9 Issue 2, ECEG, Editor: Frank Bannister, pp93 - 222


Abstract

Revenue, the Irish Tax and Customs Authority, has been developing the use of data mining techniques as part of a process of putting analytics at the core of its business processes. Recent data mining projects, which have been piloted successfully, have developed predictive models to assist in the better targeting of taxpayers for possible non-compliance, tax evasion and liquidation. The models aim, for example, to predict the likelihood of a case yielding in the event of an intervention, such as an audit. Evaluation cases have been worked in the field and the hit rate was approximately 75%. In addition, all audits completed by Revenue in the year after the models had been created were assessed using the model's probability-to-yield score, and a significant correlation exists between the expected and actual outcomes of the audits. The models are now being developed further, and are in full production in 2011. Critical factors for model success include rigorous statistical analyses, good data quality, software, teamwork, timing, resources and consistent case profiling/treatments. The models are developed using SAS Enterprise Miner and SAS Enterprise Guide. This work is a good example of the applicability of tools developed for one purpose (e.g. credit scoring for banking and insurance) having multiple other potential applications. This paper shows how the application of advanced analytics can add value to the work of tax and customs authorities, by leveraging existing data in a robust and flexible way to reduce costs by better targeting cases for interventions. Analytics can thus greatly support the business in making better-informed decisions.
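The paper's models were built in SAS Enterprise Miner and SAS Enterprise Guide. Purely to illustrate the underlying idea of a supervised model that scores cases by their probability to yield, the sketch below trains a logistic regression on invented case features in Python; the features, data and model choice are assumptions, not the actual Revenue implementation.

```python
# Illustrative only: scoring audit cases by predicted probability of yield.
# Features, data and model choice are assumptions; the actual work used SAS tools.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Columns: turnover band, late-filing count, prior-yield flag (all invented).
X_train = np.array([
    [1, 0, 0], [2, 1, 0], [3, 2, 1], [1, 3, 1],
    [2, 0, 0], [3, 1, 1], [1, 1, 0], [3, 3, 1],
])
y_train = np.array([0, 0, 1, 1, 0, 1, 0, 1])  # 1 = audit yielded

model = LogisticRegression().fit(X_train, y_train)

# Rank open cases by probability of yielding so limited audit resources
# are directed at the highest-scoring cases first.
open_cases = np.array([[2, 2, 1], [1, 0, 0]])
print(model.predict_proba(open_cases)[:, 1])
```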

 

Keywords: tax, predictive analytics, data mining, public sector, Ireland

 


Journal Article

The Challenges of Accelerating Connected Government and Beyond: Thailand Perspectives  pp183-202

Asanee Kawtrakul, Intiraporn Mulasastra, Tawa Khampachua, Somchoke Ruengittinun

© Dec 2011 Volume 9 Issue 2, ECEG, Editor: Frank Bannister, pp93 - 222


Abstract

Key issues in making Thailand more dynamic, competitive and prepared for ASEAN economic integration are the implementation of "internal smart" through eGovernment, "international smart" through intergovernmental processes, and overcoming language barriers. As a first step towards internal smart, or becoming a smart society, eGovernment has been implemented since 2000 in order to improve government services, transactions and interactions with citizens and business. Since 2007, the Ministry of Information and Communication Technology has been developing the Thailand eGovernment Interoperability Framework (TH e-GIF) as a guideline for the transformation to connected government. However, the transformation has been slow for six main reasons: the lack of national data standards and a standards governance body, the lack of a clear understanding of common processes across all involved stakeholders, the lack of best practices and knowledge sharing in implementation, the lack of data quality and data collection resources, the lack of laws and regulations on data sharing, and the absence of a proactive mindset. The challenge is how to accelerate connected government and push forward to a connected ASEAN. This work focuses on three main activities: analysing the gaps and prioritising the needs for information exchange, providing a systematic approach to data standardisation, and developing a roadmap for moving towards a smart government with smart health, smart education, smart agriculture, smart tourism, smart trade and smart energy by 2015. Using best practices and the roadmap, the creation of connected government and the connection to ASEAN can be pursued in a strategic and rapid manner. Moreover, secure e-transactions with supportive laws, science, technologies and innovation are also key factors for sustainable economic growth and the enhancement of community well-being.

 

Keywords: data standardization, TH e-GIF, connected government, connected ASEAN, data landscape, information logistic, ontology based information exchange, connected government roadmap

 


Journal Article

E‑government Information Application: Identifying Smuggling Vessels with Data mining Technology  pp47-58

Chih-Hao Wen, Ping-Yu Hsu, Chung-Yung Wang, Tai-Long Wu, Ming-Jia Hsu

© Oct 2012 Volume 10 Issue 1, Editor: Frank Bannister, pp1 - 94


Abstract

In spite of the gradual increase in the number of academic studies on smuggling crime, focus is seldom placed on the application of data mining to crime prevention. This study provides a deeper understanding and exploration of the benefits of information technology for the identification of smuggling crime, focusing on smuggling by vessels. The data source is the complete record of fishing vessels leaving and returning to ports in the Taiwan region. The paper applies both artificial neural networks (ANN) and logistic regression (LR) to classify and predict criminal behaviour in smuggling, and compares each with human inspection (HI). The study establishes models for vessels of different tonnage and operation purposes that can provide law enforcers with clearer judgment criteria. Different models are needed for different vessel types in order to reflect actual cases, since smugglers use different kinds of ship for different smuggling purposes. The results show that applying artificial neural networks to smuggling fishing vessels attains an average precision of 76.3%, and applying logistic regression achieves an average precision of 60.5%, both significantly more efficient than the current human inspection (HI) method. The study suggests that an artificial neural network model can deliver good identification performance for different vessel types as well as average savings of 90.47% on manpower loading. Information technology can greatly help to increase the probability of seizing smuggling fishing vessels. Public administration information is now stored electronically but is not used well; proper use of this electronic data can increase administrative efficiency. In this study, for example, we expect better use of the data stored in the database to establish a model for identifying smuggling. Applying such an automatic identification mechanism can help to reduce the probability of smuggling crime.
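To make the two modelling techniques named in the abstract concrete, the sketch below fits both a small neural network and a logistic regression to an invented toy dataset of vessel features. The feature names, data and library choice are assumptions for illustration, not the authors' implementation or results.

```python
# Toy comparison of the two techniques named in the abstract: an artificial
# neural network and logistic regression. Data and features are invented.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.neural_network import MLPClassifier

# Columns: vessel tonnage (scaled), hours at sea, night departure flag (invented).
X = np.array([
    [0.2, 5, 0], [0.3, 6, 0], [0.8, 30, 1], [0.9, 28, 1],
    [0.4, 8, 0], [0.7, 25, 1], [0.5, 10, 0], [0.85, 27, 1],
])
y = np.array([0, 0, 1, 1, 0, 1, 0, 1])  # 1 = smuggling case

ann = MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000, random_state=0).fit(X, y)
lr = LogisticRegression().fit(X, y)

# Score a new departing vessel with both models.
suspect = np.array([[0.75, 26, 1]])
print("ANN prediction:", ann.predict(suspect)[0])
print("LR  prediction:", lr.predict(suspect)[0])
```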

 

Keywords: government information application, crime data mining, smuggling prediction, artificial neural networks, logistic regression

 


Journal Article

Socio‑technical Impediments of Open Data  pp156-172

Anneke Zuiderwijk, Marijn Janssen, Sunil Choenni, Ronald Meijer, Roexsana Sheikh Alibaks

© Dec 2012 Volume 10 Issue 2, ECEG, Editor: Frank Bannister, pp95 - 181


Abstract

There is an increasing demand for opening up data held by public and private organisations. Various organisations have already started to publish their data, and potentially there are many benefits to be gained. However, realising the intended positive effects and creating value from using open data on a large scale is easier said than done. Opening and using data encounters numerous impediments, which can be of both a social and a technical nature. Yet no overview of impediments is available from the perspective of the open data user. Socio-technical impediments to the use of open data were identified on the basis of a literature review, four workshops and six interviews. An analysis of these 118 impediments shows that open data policies pay scant attention to the user perspective, whereas users are the ones generating value from open data. The impediments that the open data process currently encounters were analysed and categorised in ten categories: 1) availability and access, 2) findability, 3) usability, 4) understandability, 5) quality, 6) linking and combining data, 7) comparability and compatibility, 8) metadata, 9) interaction with the data provider, and 10) opening and uploading. The impediments found in the literature differ from those found in the empirical research. Our overview of impediments, derived from both literature and empirical research, is therefore more comprehensive than what was previously available. This comprehensive overview can be used as a basis for improving the open data process, and can be extended in further research. Over time some impediments will be resolved and new impediments may arise.

 

Keywords: open data, open government data, impediments, barriers, challenges, problems, user perspective.

 
