Introduction: This paper presents the concepts of the Bernoulli distribution and how it can be used as an approximation of the binomial, Poisson and Gaussian distributions, using an approach that differs from the existing literature. Owing to the discrete nature of the random variable X, the Principle of Mathematical Induction (PMI) is used as a more appropriate alternative approach to the limiting behaviour of the binomial random variable. The study proves the de Moivre–Laplace theorem (convergence of the binomial distribution to the Gaussian distribution) for all values of p such that p ≠ 0 and p ≠ 1, using a direct approach as opposed to the popular and most widely used indirect method of moment generating functions.
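As a purely illustrative numerical check of the limit the theorem describes (not the paper's PMI proof), the sketch below compares the Binomial(n, p) CDF with its Gaussian approximation for a fixed p and growing n; the values of n and p are arbitrary choices.

```python
# A minimal numerical check of the de Moivre-Laplace (binomial -> Gaussian) limit,
# assuming SciPy is available; n and p are illustrative choices, not values from the paper.
import numpy as np
from scipy import stats

def max_cdf_gap(n, p):
    """Largest absolute gap between the Binomial(n, p) CDF and its Gaussian approximation."""
    k = np.arange(n + 1)
    binom_cdf = stats.binom.cdf(k, n, p)
    # Gaussian approximation with continuity correction
    norm_cdf = stats.norm.cdf(k + 0.5, loc=n * p, scale=np.sqrt(n * p * (1 - p)))
    return np.max(np.abs(binom_cdf - norm_cdf))

for n in (10, 100, 1000, 10000):
    print(n, max_cdf_gap(n, 0.3))   # the gap shrinks as n grows, for any fixed 0 < p < 1
```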
Introduction: The purpose of this work is to introduce a new iteration, called the modified Picard-S-AK hybrid iterative scheme, for approximating fixed points of Banach contraction maps. We show that our scheme converges to a unique fixed point p at a rate faster than the recent AK iterative scheme for Banach contraction maps. Furthermore, using the Java programming language, we give some numerical examples to justify our claim. The stability and data dependence of the proposed scheme are also explored.
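For orientation, the sketch below runs plain Picard iteration x_{n+1} = T(x_n) for a contraction map; it is not the modified Picard-S-AK scheme itself, whose step structure is not given in this summary, and the example map T(x) = cos(x) is an arbitrary choice.

```python
# A minimal sketch of fixed-point approximation for a Banach contraction by plain
# Picard iteration; the modified Picard-S-AK hybrid scheme is not reproduced here.
import math

def picard(T, x0, tol=1e-12, max_iter=1000):
    """Iterate x_{n+1} = T(x_n) until successive iterates are within tol."""
    x = x0
    for n in range(max_iter):
        x_next = T(x)
        if abs(x_next - x) < tol:
            return x_next, n + 1          # approximate fixed point and iterations used
        x = x_next
    return x, max_iter

# Example: T(x) = cos(x) is a contraction near its unique fixed point ~0.739085.
p, iters = picard(math.cos, 1.0)
print(p, iters)
```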
Introduction: Turbo codes play an important role in making communication systems more efficient and reliable. This paper provides a description of two turbo decoding algorithms. The soft-output Viterbi algorithm (SOVA) and the logarithmic maximum a posteriori (Log-MAP) algorithm are the two candidates for decoding turbo codes. A soft-input soft-output (SISO) turbo decoder is considered, based on SOVA and on the logarithmic version of the MAP algorithm, namely the Log-MAP decoding algorithm, and the bit error rate (BER) performances of these algorithms are compared. Simulation results for BER performance, using a constraint length of K = 3 over an AWGN channel, show an improvement of 0.4 dB for Log-MAP over SOVA at a BER of 10^-4.
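To indicate how BER points such as those in this comparison are typically estimated, here is a minimal Monte Carlo sketch for uncoded BPSK over an AWGN channel; it does not implement the SOVA or Log-MAP turbo decoders compared in the paper, and the bit count and Eb/N0 grid are illustrative.

```python
# A minimal Monte Carlo BER estimate over AWGN for uncoded BPSK (illustration only;
# not the turbo decoding algorithms described above).
import numpy as np

def ber_bpsk_awgn(ebno_db, n_bits=1_000_000, rng=np.random.default_rng(0)):
    ebno = 10 ** (ebno_db / 10)
    bits = rng.integers(0, 2, n_bits)
    symbols = 1 - 2 * bits                      # 0 -> +1, 1 -> -1
    noise_std = np.sqrt(1 / (2 * ebno))         # unit symbol energy; Eb = Es for BPSK
    received = symbols + noise_std * rng.standard_normal(n_bits)
    decisions = (received < 0).astype(int)
    return np.mean(decisions != bits)

for ebno_db in range(0, 9, 2):
    print(ebno_db, "dB ->", ber_bpsk_awgn(ebno_db))
```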
Introduction: In this paper we describe a simulation study of the impact of vehicular traffic on the performance of a cellular wireless network at microwave carrier frequencies above 3 GHz and up to 15 GHz, where the first and subsequent tiers of co-channel interfering cells are active. The uplink information capacity of the cellular wireless network is used for the performance analysis. The simulation results show that vehicular traffic causes a decrease in the information capacity of a cellular wireless network. Results also show that, for both light and heavy vehicular traffic environments, the inclusion of subsequent-tier co-channel interference caused a decrease of between 3 and 12% in the information capacity of the cellular wireless network, compared with the case when only the first-tier co-channel interferers are active.
Introduction: This paper presents an adaptive multiuser receiver scheme for MIMO OFDM, compared against turbo equalization for single-carrier transmission. It involves joint adaptive minimum mean square error multiuser detection and decoding with prior information of the channel, and interference cancellation in the spatial domain. A partially filtered gradient LMS (adaptive) algorithm is also applied to improve the convergence speed and tracking ability of the adaptive detectors with a slight increase in complexity. The proposed technique is analyzed in slow and fast Rayleigh fading channels in MIMO OFDM systems. The adaptive multiuser detection for MIMO OFDM (AMUD MIMO OFDM) performs as well as the iterative equalization for single-carrier transmission.
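As background for the adaptive detector, the sketch below shows the standard LMS update on a synthetic channel-identification task; the partially filtered gradient LMS variant described above is not reproduced, and the tap count, step size and channel are illustrative assumptions.

```python
# Standard LMS adaptive filter sketch (illustration only; not the partially filtered
# gradient LMS variant of the paper).
import numpy as np

def lms(x, d, n_taps=4, mu=0.02):
    """Adapt an FIR filter so its output tracks the desired sequence d."""
    w = np.zeros(n_taps)
    e = np.zeros(len(x))
    for n in range(n_taps - 1, len(x)):
        u = x[n - n_taps + 1:n + 1][::-1]   # x[n], x[n-1], ..., most recent first
        e[n] = d[n] - w @ u
        w += mu * e[n] * u                  # stochastic-gradient step
    return w, e

# Usage: identify an unknown 4-tap channel from input/output data.
rng = np.random.default_rng(1)
x = rng.standard_normal(5000)
h = np.array([0.8, -0.4, 0.2, 0.1])         # unknown channel the filter should identify
d = np.convolve(x, h)[:len(x)]
w, e = lms(x, d)
print(np.round(w, 3))                       # converges towards h
```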
Introduction: In this paper, mathematical analysis, supported by simulations, is used to study the impact of base station antenna height on the performance of a land mobile cellular network. The performance is evaluated in terms of the uplink information capacity of the cellular wireless network when both the first six co-channel interfering cells (first tier) and those beyond it are considered dominant. It is shown that, at microwave frequencies beyond 2 GHz, the area spectral efficiency of the land mobile cellular network decreases as the antenna height increases.
Introduction: A number of reduced-complexity methods for turbo equalization have recently been introduced in which MAP equalization is replaced with suboptimal, low-complexity approaches. In this paper, we investigate the inefficiency of the conventional decision feedback equalizer (DFE) when higher-level modulation is used in intersymbol interference channels. We estimate the data using the a priori information from the SISO channel decoder and also the a priori detected data from the previous iteration to minimize error propagation. The proposed algorithm for 8-phase shift keying, 16-phase shift keying and 64 quadrature amplitude modulation was verified by assuming a statistical model for the a priori information, combined with the equalizer output to improve the interference cancellation. The simulation results show that the proposed low-complexity DFE algorithm improves considerably over the conventional DFE when higher-level modulation is used. It was shown that, at higher-level modulation, the modified DFE has a 1.6 dB gain over the conventional DFE after seven iterations.
Introduction: Reducing impulse noise is a very active research area in communication systems. This paper presents a digital smear-desmear technique (SDT) applied to data transmission over band-limited channels. A generalized set of filter design criteria based on minimizing the average bit error probability is introduced. The design criteria were applied to a practical digital filter implementing the SDT. The SDT is simulated and combined with coded communication systems for high data transmission rates. Simulation results show that the SDT yields a significant improvement in bit error rate for coded systems subject to impulse noise, relative to systems with no SDT. The technique also completely removes the error floor caused by the impulse noise.
Introduction: The MMSE-DFE is a well-established intersymbol interference (ISI) mitigating structure for linear, noisy, and dispersive channels. Time-invariant communication channels exhibit ISI when the channel is severely frequency selective. In this paper we investigate iterative decoding with an imperfect MMSE decision feedback equalizer using different modulation schemes. We assume that the soft outputs from the channel decoder are independent identically distributed Gaussian random variables with known mean and variance. The imperfect MMSE DFE with different modulation schemes outperforms other turbo equalization algorithms of similar computational complexity in terms of bit error rate. The achieved improvement is up to 3 dB for severely frequency-selective channels.
Introduction: The paper describes a digital smear-desmear technique (SDT) based on polyphase sequences with good autocorrelation properties. These sequences are applied to the design of digital smear/desmear filters and combined with trellis-coded modulation (TCM) codes. The scheme has been investigated for 16-QAM and 64-QAM modulation. The impulse noise is modeled as a sequence of Poisson-arriving delta functions with Gaussian amplitudes. The impulse noise parameters are computed from experimental data. Results show that the SDT filter design method yields a significant improvement in bit error rate when subject to impulse noise, relative to systems with no SDT.
Introduction: Intersymbol interference (ISI) constitutes a major impediment to reliable communications in multipath channels. We propose a low-complexity soft feedback interference canceller (SFEIC) that combines the equalizer outputs and a priori information to form more reliable estimates and perform successive interference cancellation. The receiver forms soft output decisions through an iterative process involving a soft-input soft-output (SISO) detector and a SISO channel decoder. Simulation results are presented showing that the BER performance of the proposed SFEIC compares well with the MAP equalizer and outperforms the conventional MMSE decision feedback equalizer. The results are presented for rate-½ turbo codes with quadrature phase shift keying (QPSK) modulation, transmitted over an intersymbol interference (ISI) channel with severe frequency distortion. The gain over the MMSE DFE equalizer is about 0.8 dB at a bit error rate of 10^-5.
Introduction: In this paper a prefiltering method is considered in which an all-pass filter is employed at the receiver before equalization to create a minimum-phase overall impulse response. In the presence of ISI, the all-pass filter concentrates the maximum symbol energy in the correct sampling instants and subsequently cancels the non-causal precursor ISI by replacing the samples and channel with their minimum-phase equivalents. The system performance attainable with the proposed equalization is determined for transmission with channel coding. The use of all-pass filtering is beneficial to the performance of a communication receiver operating in a dispersive multipath propagation environment and thereby improves capacity. Simulation results are given which demonstrate that the proposed approach outperforms the conventional decision feedback equalizer.
Introduction: There has been a demand for high data rates in wireless communication system devices, mainly for mobile multimedia applications. This paper investigates the suitability of turbo equalization as a means of achieving low bit error rates in future high-data-rate communication systems. Turbo equalization is an approach to coded data transmission over channels with intersymbol interference (ISI) which can yield additional improvement in bit error rate. The paper demonstrates that with higher-order modulation schemes and an iterative equalization method, high data rates can be achieved. The performance evaluation shows that turbo equalization is beneficial for higher-order modulation and thereby increases the data rate with reasonable complexity.
Introduction: We propose a system for wireless video multicast/broadcast to mobile handheld devices. We exploit the latest achievements in video coding (a scalable 3D wavelet-based video coder) and in channel coding (low-complexity rate-compatible punctured PUM-based turbo codes). In the case of severe channel conditions, additional Reed-Solomon coding is applied across the packets. The scalable bitstream is split into multiple frames which are encoded separately. Fast rate-optimal algorithms are used to determine a suboptimal source-channel symbol allocation. Experimental results given for different video sequences and channel conditions indicate good performance achieved at low complexity.
Introduction: The paper describes a digital smear-desmear technique (SDT) based on polyphase multilevel sequences of unlimited length with good autocorrelation properties. A design procedure for the digital implementation of the SDT is defined and sequences with power efficiency higher than 50% are generated. These sequences are applied to the design of digital smear/desmear filters and combined with uncoded and coded ITU-T V.150.1 communication systems. The impulse noise is modeled as a sequence of Poisson-arriving delta functions with Gaussian amplitudes. The impulse noise parameters are computed from experimental data. Simulation results show that the SDT filter design method yields a significant improvement in bit error rates for both systems subject to impulse noise, relative to systems with no SDT. The technique also completely removes the error floor caused by impulse noise.
Introduction: Turbo coding, a forward error correcting (FEC) coding technique, has made near-Shannon-limit performance possible when iterative decoding algorithms are used. Intersymbol interference (ISI) is a major problem in communication systems when information is transmitted through a wireless channel. Conventional approaches implement an equalizer to remove the ISI, but significant performance gains can be achieved through joint equalization and decoding. In this thesis, the suitability of turbo equalization as a means of achieving low bit error rates for high-data-rate communication systems over channels with intersymbol interference was investigated. A modified decision feedback equalizer (DFE) algorithm that provides a significant improvement over the conventional DFE is proposed. It estimates the data using the a priori information from the SISO channel decoder and also the a priori detected data from the previous iteration to minimize error propagation. Investigation was also carried out into iterative decoding with an imperfect minimum mean square error (MMSE) decision feedback equalizer, assuming soft outputs from the channel decoder that are independent identically distributed Gaussian random variables. A prefiltering method is also considered in this thesis, where an all-pass filter is employed at the receiver before equalization to create a minimum-phase overall impulse response. The band-limited channel suffers performance degradation due to impulsive noise generated by electrical appliances; this thesis therefore analysed a set of filter design criteria based on minimizing the bit error probability under impulse noise using a digital smear filter.
Introduction: Mathematical analysis, supported by computer simulations, is used to find a theoretical limit to cell size reduction in a cellular wireless network in a shadowed fading environment. An information capacity measure based on the uplink of the cellular wireless network is used for the analysis. Results show that at higher microwave carrier frequencies and smaller cell radii, some of the second-tier co-channel interfering cells (the co-channel cells beyond the first six) become active. This causes a decrease in the information capacity of the cellular wireless network. The results also show that the decrease in information capacity becomes larger as the carrier frequency increases and the cell radius decreases.
Introduction: This paper presents an adaptive multiuser receiver scheme for MIMO OFDM, benchmarked against iterative equalization for single-carrier transmission, which we refer to as Iterative AMUD MIMO OFDM. It involves the joint iteration of the adaptive minimum mean square error multiuser detection and decoding algorithm with prior information of the channel, and interference cancellation in the spatial domain. A partially filtered gradient LMS (adaptive) algorithm is also applied to improve the convergence speed and tracking ability of the adaptive detectors with a slight increase in complexity. The proposed technique is analyzed in slow and fast Rayleigh fading channels in MIMO OFDM systems. The adaptive multiuser detection for MIMO OFDM system (AMUD MIMO OFDM) performs as well as the iterative equalization for single-carrier transmission for higher-order modulation schemes. The LMS algorithm and the maximum a posteriori (MAP) algorithm are utilized in the receiver …
Introduction: This paper presents an adaptive vector precoding scheme for the downlink of multiuser multiple-input multiple-output orthogonal frequency division multiplexing (MIMO OFDM) systems with multiple data streams per user. The ZF precoder requires pseudo-inversion of the channel matrix, and this operation is only optimum when the transmitter power is unconstrained. This paper presents efficient methods to reduce the computational load of the algorithm by interpolating the precoding and decoding matrices corresponding to different OFDM subcarriers. In the feedback scenario, the precoder matrix has to be designed for all subcarriers and can become very large. The precoded MIMO OFDM system adapts to changes in channel characteristics through automatic updating. This approach improves the bit error rate by approximately an order of magnitude compared with the ZF approach, utilizing other channel decomposition techniques. In comparison with the Moore-Penrose pseudo-inverse, an increase in the capacity of the MIMO OFDM system is achieved with less computational complexity.
Introduction: The robust delivery of video over emerging wireless networks poses many challenges due to the heterogeneity of access networks, the variations in streaming devices, and the expected variations in network conditions caused by interference and coexistence. The proposed approach exploits the joint optimization of a wavelet-based scalable video/image coding framework and a forward error correction method based on PUM turbo codes. The scheme minimizes the reconstructed image/video distortion at the decoder subject to a constraint on the overall transmission bitrate budget. The minimization is achieved by exploiting the rate optimization technique and the statistics of the transmission channel.
Introduction: The need to accurately predict and make the right decisions regarding crude oil prices motivates the proposition of an alternative algorithmic method based on real-valued negative selection with variable-sized detectors (V-Detectors), incorporating fuzzy-rough set feature selection (FRFS), for predicting the most appropriate choices. The objective of this study is to enhance the performance of V-Detectors using FRFS for crude oil prices. Applying FRFS serves to prune the number of features by retaining the most informative and critical ones. The V-Detector algorithm then trains and tests on the selected features. Different radius values are applied for the V-Detectors. Experimental outcomes, in comparison with established algorithms such as support vector machine, naïve Bayes, multi-layer perceptron, J48, non-nested generalized exemplars, IBk, fuzzy-rough NN, and vaguely quantified nearest neighbor, demonstrate that FRFS-V-Detectors is proficient and valuable for insightful knowledge on crude oil prices. Thus, it can assist in establishing oil price market policies on the international scale.
Introduction: Mining agricultural data with artificial immune system (AIS) algorithms, particularly the clonal selection algorithm (CLONALG) and the artificial immune recognition system (AIRS), forms the bedrock of this paper. Fuzzy-rough feature selection (FRFS) and vaguely quantified rough set (VQRS) feature selection are coupled with CLONALG and AIRS for improved detection and computational efficiency. Comparative simulations with sequential minimal optimization and multi-layer perceptron reveal that CLONALG and AIRS produce significant results. Their respective FRFS and VQRS upgrades, namely FRFS-CLONALG, FRFS-AIRS, VQRS-CLONALG, and VQRS-AIRS, are able to generate the highest detection rates and lowest false alarm rates. Thus, gathering useful information with the AIS models can help to enhance productivity related to agriculture.
Introduction: The real-valued negative selection algorithms that are the focal point of this work generate their detector sets based on the points of the self data. Self data is regarded as the normal behavioural pattern of the monitored system. An anomaly in data alters the confidentiality and integrity of its content, thereby causing a defect in making useful and accurate decisions. Therefore, to correctly detect such anomalies, this study applies real-valued negative selection with fixed-sized detectors (RNSA) and with variable-sized detectors (V-Detector) for the classification and detection of anomalies. The classifier algorithms Support Vector Machine (SVM) and K-Nearest Neighbour (KNN) are used for benchmarking the performance of the real-valued negative selection algorithms. Experimental results illustrate that the RNSA and V-Detector algorithms are suitable for the detection of anomalies, with SVM and KNN producing significant efficiency rates. It was also found that the V-Detector yielded superior performance relative to the other algorithms.
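A minimal real-valued negative selection sketch is given below, assuming features normalised to the unit hypercube; it uses fixed-radius detectors in the spirit of RNSA (the variable-radius V-Detector logic is not reproduced), and the radii, counts and synthetic self region are illustrative assumptions.

```python
# Fixed-radius real-valued negative selection sketch: keep random detectors that lie
# farther than a self radius from every self sample, then use them to flag anomalies.
import numpy as np

def train_detectors(self_data, n_detectors=200, self_radius=0.1, rng=np.random.default_rng(0)):
    detectors = []
    while len(detectors) < n_detectors:
        candidate = rng.random(self_data.shape[1])          # random point in the unit hypercube
        if np.min(np.linalg.norm(self_data - candidate, axis=1)) > self_radius:
            detectors.append(candidate)
    return np.array(detectors)

def is_anomalous(sample, detectors, detector_radius=0.1):
    """A sample is flagged when it falls inside any detector's radius."""
    return bool(np.any(np.linalg.norm(detectors - sample, axis=1) <= detector_radius))

# Usage: synthetic self data clustered in one corner of the unit square.
self_data = np.random.default_rng(1).random((500, 2)) * 0.4
detectors = train_detectors(self_data)
print(is_anomalous(np.array([0.2, 0.2]), detectors))   # inside the self region -> usually False
print(is_anomalous(np.array([0.9, 0.9]), detectors))   # far from the self region -> usually True
```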
Introduction: This paper implements real-valued negative selection with variable-sized detectors (V-Detectors) for projecting the right decision with respect to crude oil prices. The Brent crude oil data is retrieved from the US Department of Energy. Using varying radius values for the V-Detector, a comparison in terms of detection rate and false alarm rate with support vector machine, naïve Bayes, multi-layer perceptron, J48, non-nested generalized exemplars, IBk, fuzzy-rough NN, and vaguely quantified nearest neighbor demonstrated that the V-Detector is efficient and computationally effective. The experimental outcome can inform international crude oil market policy making, as the V-Detector is able to reach the highest detection and lowest false alarm rates.
Introduction: Mining agricultural data with artificial immune system (AIS) algorithms, particularly the clonal selection algorithm (CLONALG) and the artificial immune recognition system (AIRS), forms the bedrock of this paper. A fuzzy-rough feature selection (FRFS) method is coupled with CLONALG and AIRS for improved detection and computational efficiency. Comparative simulations with sequential minimal optimization and multi-layer perceptron reveal that CLONALG and AIRS produce significant results. Their respective FRFS upgrades, namely FRFS-CLONALG and FRFS-AIRS, are able to generate the highest detection rates and lowest false alarm rates. Thus, gathering useful information with the AIS models can help to enhance productivity related to agriculture.
Introduction: The ability of the Negative Selection Algorithm (NSA) to solve a number of anomaly detection problems has proved to be effective. This paper therefore presents an experimental study of the negative selection algorithm alongside some classification algorithms. The purpose is to ascertain their efficiency in accurately detecting abnormalities in a system when tested with well-known datasets. The negative selection algorithm and some selected immune and classifier algorithms are used for experimentation and analysis. Three different datasets were acquired for this task and a performance comparison executed. The empirical results illustrate that the artificial immune system's negative selection algorithm can achieve the highest detection rate and the lowest false alarm rate. This signifies the suitability and potential of the NSA for discovering unusual changes in a normal behavioural flow.
Introduction: Climate change has a huge impact on the development of a country and is one of the considerations in planning activities for the advancement of a country. This change also has adverse effects on the environment, such as flooding, drought, acid rain and extreme temperature changes. To avert these dangerous and hazardous developments, early prediction of changes in temperature and ozone is of utmost importance. The Multilayer Perceptron (MLP) neural network, which applies the Back Propagation (BP) algorithm as its supervised learning method, has been adopted for this purpose based on its success in various meteorological prediction tasks. Nevertheless, its convergence speed suffers from the multilayer structure of the network architecture. As a consequence, this paper proposes a Functional Link Neural Network (FLNN) model, which has only a single layer of tunable weights, trained with the Modified Cuckoo Search algorithm (MCS); the resulting model is called FLNN-MCS. The FLNN-MCS is used to predict daily temperatures and ozone. Comprehensive simulation results are compared with the standard MLP and an FLNN trained with BP. Based on the extensive output, FLNN-MCS proved effective compared with the other network models, reducing prediction error and converging quickly.
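The sketch below shows the FLNN idea of a functional (here trigonometric) expansion feeding a single layer of tunable weights; for illustration the weights are fitted by least squares on synthetic data, whereas the paper trains them with the Modified Cuckoo Search algorithm, which is not reproduced here.

```python
# Functional link neural network sketch: expand inputs with basis functions, then fit
# a single weight layer. Least squares stands in for the Modified Cuckoo Search trainer.
import numpy as np

def functional_expansion(x):
    """Expand each scalar input with sin/cos terms (a common FLNN choice)."""
    return np.column_stack([x, np.sin(np.pi * x), np.cos(np.pi * x),
                            np.sin(2 * np.pi * x), np.cos(2 * np.pi * x)])

rng = np.random.default_rng(0)
x = rng.random(200)                                          # assumed normalised inputs
y = np.sin(2 * np.pi * x) + 0.1 * rng.standard_normal(200)   # synthetic target series

phi = functional_expansion(x)
w, *_ = np.linalg.lstsq(phi, y, rcond=None)                  # the single tunable weight layer
pred = phi @ w
print("MSE:", np.mean((pred - y) ** 2))
```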
Introduction: Within the Artificial Immune System community, the most widely implemented algorithm is the Negative Selection Algorithm. Its performance rests largely on the interaction between the detector generation algorithm and the matching technique adopted. Depending on the type of data representation, either string or real-valued, the proper detection algorithm must be assigned. The detectors are thus allowed to efficaciously cover the non-self space with a small number of detectors. In this paper, the different categories of detector generation algorithms and matching rules are presented. Briefly, the biological and artificial immune systems, as well as the theory of the negative selection algorithm, are introduced. The exhaustive detector generation algorithm used in the original Negative Selection Algorithm laid the foundation for proffering other algorithmic methods, based on sets of rules, for generating valid detectors for revealing anomalies.
Introduction: Grid computing, a new and broad area of research, aims at sharing available information and resources through the use of computers over the network. To use the new applications of the grid, it is necessary to adapt modern software components and assembled information resources in a flexible format. Web services provide the necessary capabilities for achieving this goal in the form of grid services. Due to the exponentially increasing amount of data, documents, resources and services available on the web, finding an acceptable agreement between the user and the abilities of a web or grid service, as well as forming an appropriate composition of service components for performing a requested operation, are critical issues. Measuring the similarity of services is an important and valuable solution that is used in practical reasoning such as the replacement of one service with another and the composition of services and applications. Because measuring service similarity requires an appropriate semantic model, this paper presents a semantic model for services based on the OWL ontology language and, on top of it, a similarity measure. We define a semantic model for services and then provide a method for measuring the similarity between two services. A mathematical model for solving the given problems is also proposed. The results, evaluated by the F1 measure, clearly show the improvement in accuracy over the previous method.
Introduction: The roles played by Enterprise Resource Planning (ERP) systems in any organization in achieving operational excellence and competitive advantage cannot be underestimated. However, the cost of implementing traditional ERP has been observed to be a bane for most Small and Medium Enterprises (SMEs), which are globally known as the major drivers of most agile economies. Cloud computing is a technology paradigm that affords SMEs affordable services in which ERP can be cloud-hosted and rented on a pay-per-use basis, which does not require a great deal of initial capital to ensure business continuity in a highly competitive market. With many providers offering ERP as Software-as-a-Service (SaaS), SMEs are faced with the challenge of selecting a provider with a Quality of Service (QoS) suitable enough to meet the customized requirements of the organization. A model is presented in this paper which seeks to address this selection challenge. Apart from assessing suitability, the model also attempts to select the cheapest among the few providers already found suitable for the SMEs.
Introduction: Directory services facilitate access to information organized under a variety of frameworks and applications. The Lightweight Directory Access Protocol (LDAP) is a promising technology that provides access to directory information using a data structure similar to that of the X.500 protocol. IBM Tivoli, Novell, Sun, Oracle, Microsoft, and many other vendors feature LDAP-based implementations. The technology's increasing popularity is due both to its flexibility and to its compatibility with existing applications. A directory service is a searchable database repository that lets authorized users and services find information related to people, computers, network devices, and applications. Given the increasing need for information, particularly over the Internet, directory popularity has grown over the last decade, and directories are now a common choice for distributed applications. LDAP accommodates the need for a high level of security, single sign-on, and centralized user management. The protocol offers security services and an integrated directory capable of storing and managing user information, so that the applications, services and servers to be accessed, as well as user privileges, can be determined in one place. It is also necessary to realize file sharing between different operating systems in a local area network; the Samba software package, as the bridge between Windows and Linux, can help resolve this problem. In this paper, we explore previous literature on this topic, consider current authors' work, and then present our views on the subject of discussion based on our understanding.
Introduction: The Information Technology world is developing very fast, and it has been reported that open source tools will eventually take over proprietary tools in the not-too-distant future. The open source community is integrating its products with proprietary ones, and the integration of Windows machines into Linux networks is evidence of such practice. The purpose of this project is to implement Samba with OpenLDAP in a simulated environment. The implementation is conducted within a virtual environment by simulating the setup of Linux and Windows operating systems, reducing the physical setup of machines. Samba acts as an interface between Linux and Windows, so files are accessible to both server and client; OpenLDAP stores the user accounts and configuration files. A performance test carried out on Samba to determine its effect on CPU power and memory usage shows a decrease in CPU usage and an increase in memory usage.
Introduction: Enterprise Resource Planning (ERP) systems provided as a service in the cloud offer immense assistance to Small and Medium-sized Enterprises (SMEs) with respect to cost effectiveness and affordability, enabling them to compete favorably and fairly with their Large Enterprise (LE) counterparts, who have the financial wherewithal to adopt the rather expensive traditional or on-premise ERP systems. However, to achieve such numerous benefits, such as economies of scale, improved flexibility, reduced capital cost and improved accessibility, many service subscribers (tenants) often share the same remote physical infrastructure put in place by the cloud service providers (CSPs). This multi-tenancy concept, however, introduces a high level of security and privacy risk unique to cloud services, such as attacks from other consumers, who may be competitors or simply hackers, sharing the same infrastructure. Hence, it becomes a challenge for the SMEs to select a suitable CSP whose security and privacy mechanisms best meet the security requirements of the organization. We critically appraise some selected frameworks for the selection of cloud computing and, more specifically, cloud ERP providers, in order to identify dimensions and measures of selection. We compare the frameworks based on their components, the selection criteria in the approaches, and their suitability for SMEs. We discovered that there is a link between frameworks for the evaluation, ranking and selection of cloud providers. In our review, we found that current selection methods are complementary to one another in the sense that they select …
Introduction: Enterprise Resource Planning (ERP) systems automate and integrate business management activities, thereby helping the organization achieve operational excellence and competitive advantage. However, traditional ERP has been observed to be too costly for most Small and Medium Enterprises (SMEs), which are known to be the economic agents of rural development globally. The emergence of cloud computing offers SMEs the opportunity to access cloud-hosted infrastructure and evade huge initial capital outlays. As there are myriad providers offering ERP as Software-as-a-Service (SaaS), there arises the challenge of choosing a provider with a Quality of Service (QoS) suitable to meet the organization's customized requirements. The paper presents a model which not only seeks to address this challenge but also goes a step further to select the most affordable among the selected few providers that …
Introduction: Enterprise Applications (EAs) help organizations achieve operational excellence and competitive advantage. Over time, most Small and Medium Enterprises (SMEs), which are known to be the major drivers of most thriving global economies, have used the costly on-premise versions of these applications, making it difficult for their businesses to thrive competitively in the same market environment as their large-enterprise counterparts. The advent of cloud computing presents SMEs with an affordable offer and great opportunities, as such EAs can be cloud-hosted and rented on a pay-per-use basis which does not require huge initial capital. However, as there are numerous Cloud Service Providers (CSPs) offering EAs as Software-as-a-Service (SaaS), there is the challenge of choosing a suitable provider with a Quality of Service (QoS) that meets the organization's customized requirements. The proposed model takes care of that …
Introduction: Enterprise Resource Planning (ERP) systems automate and integrate business management activities, thereby helping the organization achieve operational excellence and competitive advantage. However, traditional ERP has been observed to be too costly for most Small and Medium Enterprises (SMEs), which are known to be the economic agents of rural development globally. The emergence of cloud computing offers SMEs the opportunity to access cloud-hosted infrastructure and evade huge initial capital outlays. As there are myriad providers offering ERP as Software-as-a-Service (SaaS), there arises the challenge of choosing a provider with a Quality of Service (QoS) suitable to meet the organization's customized requirements. The paper presents a model which not only seeks to address this challenge but also goes a step further to select the most affordable among the selected few providers that are suitable for the SMEs, to keep them agile and competitive.
Introduction: In this paper, we extend the variational iteration method (VIM) to find approximate solutions of linear and nonlinear thirteenth-order differential equations in boundary value problems. The method is formulated directly for the boundary value problems. Two numerical examples are presented as numerical illustrations of the method, and their results are compared with those reported in [1,2]. The results reveal that VIM is very effective and highly promising in comparison with other numerical methods.
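For orientation, the sketch below applies the variational iteration correction functional to a simple first-order test problem u' + u = 0, u(0) = 1, with Lagrange multiplier λ = −1; the paper's thirteenth-order boundary value problems and their multipliers are not reproduced here.

```python
# Variational iteration method on a toy problem u' + u = 0, u(0) = 1:
# u_{n+1}(x) = u_n(x) - int_0^x [u_n'(s) + u_n(s)] ds, which converges to exp(-x).
import sympy as sp

x, s = sp.symbols("x s")
u = sp.Integer(1)                     # u_0 from the initial condition u(0) = 1
for _ in range(5):
    u_s = u.subs(x, s)
    u = sp.expand(u - sp.integrate(sp.diff(u_s, s) + u_s, (s, 0, x)))

print(u)                                          # Taylor partial sum of exp(-x)
print(sp.series(sp.exp(-x), x, 0, 6).removeO())   # matches term by term
```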
Introduction: This paper presents a robust signature verification and forgery detection system using a fuzzy modeling technique. The features of various handwritten signatures are sampled, properly analysed and encapsulated to devise an effective verification system. A grid method is used to extract feature angles for the detection of forgeries and the verification of genuine signatures. An exponential membership function is used to fuzzify the derived features; it is modified with structural parameters so as to adapt to any possible variations resulting from handwriting styles and to reflect other factors due to the scripting of a signature. The proposed system is tested on a large database of signatures obtained from 40 subjects.
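A minimal sketch of the fuzzification step is given below: an exponential membership function scores how close a test-signature feature angle is to an enrolled reference. The fixed reference angle and spread stand in for the structural parameters the paper adapts, so they are assumptions.

```python
# Exponential membership function for a fuzzified signature feature (illustration only;
# the paper's adaptive structural parameters are replaced by fixed values).
import math

def exp_membership(angle, reference_angle, spread):
    """Degree of membership in (0, 1]: 1 at the reference angle, decaying with distance."""
    return math.exp(-abs(angle - reference_angle) / spread)

# Usage: compare a test-signature grid angle against the enrolled reference angle.
print(exp_membership(angle=42.0, reference_angle=40.0, spread=5.0))   # ~0.67
```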
Introduction: In this information age, acquired knowledge is spread mainly through the web. Given the exponential growth of information available through social media interfaces (for example, every minute over 50 million pieces of information are uploaded to the internet and over 50 hours of video are uploaded to YouTube alone), it is practically impossible to follow the information flow in real time, which makes it more difficult for some countries to thrive in research conducted in foreign languages. These foreign languages bring numerous perils, including the death that may befall an indigenous language and render its growth or relevance largely irrelevant, leading to the 'birth' of a population with little regard for its own language and thereby slowing the technological advancement of the nation. This research work extracts the factors responsible for the annihilation of the Yoruba language in a state. Questionnaires were administered and the responses used to generate a dataset; the Gain Ratio and ReliefF techniques from Weka 3.7.9 were applied, a threshold was set for each technique, and the factors common to both were taken as those responsible for Yoruba language annihilation. The results show that lack of commitment to indigenous language use, colonial legacy and ineffective language planning by the government are the factors most responsible for Yoruba language annihilation. These factors are also responsible for the slow technological development of a nation.
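The ranking-and-threshold step can be sketched as follows, using scikit-learn's mutual information as a stand-in for Weka's Gain Ratio and ReliefF; the questionnaire dataset, Weka settings and threshold value below are all synthetic assumptions.

```python
# Feature ranking with a threshold, analogous to the Gain Ratio / ReliefF step
# described above (mutual information is used here as an illustrative stand-in).
import numpy as np
from sklearn.feature_selection import mutual_info_classif

rng = np.random.default_rng(0)
X = rng.integers(1, 6, size=(120, 8))          # 8 synthetic Likert-scale questionnaire items
y = (X[:, 0] + X[:, 3] > 6).astype(int)        # synthetic "annihilation factor" label

scores = mutual_info_classif(X, y, discrete_features=True, random_state=0)
threshold = 0.05                               # illustrative cut-off
selected = [i for i, score in enumerate(scores) if score >= threshold]
print(np.round(scores, 3), "selected items:", selected)
```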
Introduction: Using a survey design, this study investigated bank customers' confidence in electronic payment channel services in Abuja Municipal Area Council (AMAC), a Nigerian municipality. Structured questionnaires were used to collect data, which were analysed using descriptive statistical methods. The study shows that ease of use of electronic payment system platforms is a major factor in online banking activities, and that excellent customer service is the primary reason for maintaining a bank relationship. Based on the findings of the study, recommendations were made on how to further strengthen bank customers' confidence in electronic payment systems.
Introduction: Internet traffic engineering is defined as that aspect of Internet network engineering dealing with the performance evaluation and optimization of operational IP networks. Traffic engineering encompasses the application of technology and scientific principles to the measurement, characterization, modeling, and control of Internet traffic. Enhancing the performance of an operational network, at both the traffic and resource levels, is a major objective of Internet traffic engineering. Traffic-oriented performance measures include packet transfer delay, packet delay variation, packet loss, and throughput. Packet transfer delay is a concept in packet-switching technology: the sum of the store-and-forward delays that a packet experiences in each router gives the transfer, or queuing, delay of that packet across the network. Packet transfer delay is influenced by the level of network congestion and the number of routers along the transmission path. Packet loss occurs when one or more packets of data travelling across a computer network fail to reach their destination. Packet loss is distinguished as one of the three main error types encountered in digital communications, the other two being bit errors and spurious packets caused by noise. The fraction of lost packets increases as the traffic intensity increases. Therefore, performance at a node is often measured not only in terms of delay, but also in terms of the probability of packet loss.
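To make the delay-and-loss-versus-traffic-intensity relationship concrete, the sketch below evaluates a single router buffer modelled as an M/M/1/K queue; the queueing model, buffer size and rates are illustrative assumptions rather than anything specified in the text.

```python
# Delay and loss at a single node modelled as an M/M/1/K queue: both grow as the
# traffic intensity increases (illustrative model and parameters).
def mm1k_metrics(lam, mu, buffer_size):
    """Return (blocking probability, mean delay of accepted packets)."""
    K = buffer_size
    rho = lam / mu
    weights = [rho ** n for n in range(K + 1)]
    total = sum(weights)
    p = [w / total for w in weights]               # stationary distribution of queue length
    p_loss = p[K]                                   # an arrival finding the buffer full is dropped
    mean_n = sum(n * p[n] for n in range(K + 1))
    accepted_rate = lam * (1 - p_loss)
    mean_delay = mean_n / accepted_rate             # Little's law with the effective arrival rate
    return p_loss, mean_delay

for lam in (0.3, 0.6, 0.9, 1.2):                    # packets per millisecond; service rate 1.0/ms
    loss, delay = mm1k_metrics(lam, mu=1.0, buffer_size=10)
    print(f"traffic intensity {lam:.1f}: loss {loss:.4f}, delay {delay:.2f} ms")
```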
Introduction: The concept of data mining, which has become a promising field because of its applications in various areas, is used in this work for strategic equipment maintenance in telecommunication networks. Efficient maintenance of equipment in large telecommunication networks is difficult and complex. Thus, there is a need for intelligent systems to guide engineers in the proper maintenance of equipment in telecommunication network installations. The idea is to apply a modified Apriori frequent pattern algorithm to mine frequent faulty events and identify association rules among items in a telecommunication network alarm database. We implemented the novel approach on the faulty events database of a GSM company in Nigeria. The result shows a promising tool that is much needed for intelligent decision-making in equipment maintenance in telecommunication installations.
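A simplified Apriori-style frequent-itemset sketch over alarm "transactions" is shown below; it omits the subset-pruning step and the association-rule generation of the full (and modified) algorithm, and the alarm events and support threshold are illustrative, not drawn from the GSM fault database.

```python
# Simplified Apriori-style mining of frequent faulty-event itemsets (illustration only).
from itertools import combinations

transactions = [
    {"POWER_FAIL", "LINK_DOWN", "HIGH_TEMP"},
    {"POWER_FAIL", "LINK_DOWN"},
    {"LINK_DOWN", "HIGH_TEMP"},
    {"POWER_FAIL", "LINK_DOWN", "HIGH_TEMP"},
]

def apriori(transactions, min_support=0.5):
    """Return frequent itemsets and their supports (no subset pruning, no rules)."""
    n = len(transactions)
    current = [frozenset([item]) for item in {i for t in transactions for i in t}]
    frequent, k = {}, 1
    while current:
        level = {}
        for cand in current:
            support = sum(cand <= t for t in transactions) / n
            if support >= min_support:
                level[cand] = support
        frequent.update(level)
        k += 1
        # candidate generation: join frequent k-itemsets into (k+1)-itemsets
        current = list({a | b for a, b in combinations(level, 2) if len(a | b) == k})
    return frequent

for itemset, support in sorted(apriori(transactions).items(), key=lambda kv: -kv[1]):
    print(sorted(itemset), round(support, 2))
```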
Introduction: An analysis of Ayo is presented in this paper. The game is briefly described and it is shown that myopic decision making guarantees a solution to the game. The "odu" concept is discussed and it is shown that this strategy does not alter the solution methodology applied to solve the game without it. It is shown that the payoff matrix does not always have a saddle point. Nevertheless, a solution exists.
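The saddle-point condition mentioned above can be checked directly: a two-player zero-sum payoff matrix has a saddle point exactly when the maximum of the row minima equals the minimum of the column maxima. The sketch below performs this check on arbitrary example matrices, not on an Ayo payoff matrix from the paper.

```python
# Saddle-point check for a zero-sum payoff matrix: compare maximin and minimax values.
def saddle_point(matrix):
    row_minima = [min(row) for row in matrix]
    col_maxima = [max(col) for col in zip(*matrix)]
    maximin, minimax = max(row_minima), min(col_maxima)
    return maximin, minimax, maximin == minimax

print(saddle_point([[3, 1, 4], [2, 2, 5], [0, 1, 6]]))   # (2, 2, True)  -> saddle point exists
print(saddle_point([[1, 4], [3, 2]]))                    # (2, 3, False) -> no saddle point
```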
Introduction: Decision making plays an important role in the life of every living creature. Virtually on a daily basis, people must make one or more decisions, and a faulty decision can lead to defeat in any competition. This paper presents the process of making decisions on the basis of knowledge of game playing as a major key in defining human characteristics. We simulated Ayo game playing on a digital computer and empirically evaluated the behavior of the prototype simulation. An empirical judgment was carried out on how experts play the Ayo game as a means of evaluating the performance of the heuristics used to evolve the Ayo player in the simulation. A paper-based questionnaire was designed and administered to Ayo game players and used for the assessment of players' perceptions of the prototype simulation, allowing statistical interpretation. This presents a novel means of solving the problem of decision making in move selection in computer game-playing of Ayo.
Introduction: Researchers have used many techniques in designing intrusion detection systems (IDS), and yet we still do not have an effective IDS. The interest of this work is to combine techniques of data mining and expert systems in designing an effective anomaly-based IDS. Combining methods may give better coverage and make detection more effective. The idea is to mine system audit data for consistent and useful patterns of user behaviour, and then keep these normal behaviours in profiles. An expert system is used as the detection component that recognizes anomalies and raises an alarm. An evaluation of the intrusion detection system design was carried out to justify the importance of the work.
Introduction: The goal of our work is to discuss the fundamental issues of privacy and anomaly-based intrusion detection systems (IDS) and to design an efficient anomaly-based IDS architecture in which users' privacy is maintained.
Introduction: This paper describes an approach to visualizing concurrency control (CC) algorithms for real-time database systems (RTDBs). The approach is based on the principles of software visualization, which have been applied in related fields. The Model-View-Controller (MVC) architecture is used to alleviate the black-box syndrome associated with the study of algorithm behaviour for RTDB concurrency controls. We propose a visualization "exploratory" tool that assists the RTDB designer in understanding the actual behaviour of the concurrency control algorithms of choice and in evaluating the performance quality of the algorithms. We demonstrate the feasibility of our approach using an optimistic concurrency control model as a case study. The developed tool substantiates earlier simulation-based performance studies by exposing spikes at some points, when visualized dynamically, that are not observed using the usual static graphs. Ultimately, this tool helps address the problem of contradictory assumptions about CC in RTDBs.
Introduction: The major goal in defining and examining game scenarios is to find good strategies as solutions to the game. A plausible solution is a recommendation to the players on how to play the game, represented as strategies guided by the various choices available to the players. These choices invariably compel the players (decision makers) to execute an action following some conscious tactics. In this paper, we propose a refinement-based heuristic as a machine learning technique for human-like decision making in playing the Ayo game. The results show that our machine learning technique is more adaptable and more responsive in making decisions than human intelligence. The technique has the advantage that the search is astutely conducted in a shallow-horizon game tree. Our simulation was tested against the Awale shareware and an appealing result was obtained.
Introduction: In playing the Ayo game, both the opening and the endgame are often stylized. The opening is very interesting, with both players showing skill through the speed of their movements. However, there exists an endgame strategy in Ayo called the Completely Determined Game (CDG), whose usefulness for ending a game is readily apparent. In this paper, we present the CDG as a class of endgame strategy and describe its configuration, together with a detailed analysis of its winning positions, which generates an integer sequence and some self-replicating patterns.
Introduction: The success or otherwise of Software Engineering (SE) activities depends on the interactions among software engineers. Effective interactions, in turn, depend largely on personality traits, which are consistent and long-lasting tendencies in behaviour. In psychology, five major trait factors (the Big Five) have generally been used to assess people's personality, but these might not be adequate in SE because of the technical and cognitive skills required. In this work, we first present cognitive ability as an additional factor that must be measured in order to adequately assess personality in SE. A research survey was conducted to capture personality requirements in SE. Based on the results of the survey, we develop a model for assessing personality traits in SE. We then design an assessment technique based on responses to well-structured and deductive online questions. The implementation of the model using Visual Basic resulted in a much-needed tool that can guide intending software engineers in choosing an area of specialization in SE based on their personality traits.
Introduction: From a queuing system approach, a compute-intensive application on a single-processor computer system can be defined as one in which the arrival rate of processes into the processor queue is greater than the departure rate of processes from the processor. On the other hand, a non-compute-intensive application is one in which the arrival rate is less than the departure rate. A single-processor computer system can be used for compute-intensive and non-compute-intensive applications. In a compute-intensive application, the processor is busy most of the time because there is always a job to be executed. This paper therefore aims to use a novel and efficient queuing approach to model some of the performance metrics of compute-intensive applications on a single-processor computer system.
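As a standard illustration of the queueing quantities involved (assuming an M/M/1 model, which the abstract itself does not fix), the usual relations are:

```latex
% Standard M/M/1 relations, used here purely for illustration.
% \lambda : arrival rate of processes into the processor queue, \mu : service (departure) rate.
\rho = \frac{\lambda}{\mu}, \qquad
L = \frac{\rho}{1-\rho}, \qquad
W = \frac{1}{\mu - \lambda}, \qquad \text{valid only when } \lambda < \mu .
```

In this notation, the compute-intensive regime defined above (arrival rate greater than departure rate, λ > μ) is exactly the regime in which no finite steady-state queue length exists and the processor stays busy.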
Introduction: The XML standard adopted for interoperability among Web services has played a prominent role in enhancing system and application integration and facilitates linking up clients' demands through loosely coupled platforms. However, this technology has a major disadvantage: it lacks semantic representation of Web content, which makes the UDDI service discovery scheme inefficient. Adding semantic features to Web services has a major role to play in facilitating the automatic publishing, discovery, composition, and usage of the numerous services available on the Web, which are dynamically published and discovered. In this paper, we apply a semantic registry-based system for personalizing the discovery of car Web services. We maintain an OWL-based ontology for the car service together with a catalogue that keeps track of the published services for easy discovery. The semantic registry-based approach enhances service discovery and guarantees quality of service by ensuring the satisfaction of users' requests, which is what Web service personalization is meant to achieve.
Introduction: Intrusion detection systems have become very important computer security mechanisms as computer break-ins are becoming more common every day. An intrusion detection system (IDS) monitors computers and networks for any set of actions that attempt to compromise the integrity, confidentiality or availability of computer resources. The goal of this paper is to discuss the fundamentals of IDS and to create awareness of why IDS should be embraced. A user study was carried out to understand the perceptions of individuals, organisations and companies on the use of IDS. A summary of the problems of current IDS designs and the challenges ahead is presented. We also look at what the new approaches or future directions in IDS design should be in order to eliminate these shortcomings.
Introduction: This paper presents a dynamic model of COVID-19 and of citizens' reaction, for a fraction of the population in Nigeria, using fractional derivatives. We consider the reported cases from February to June 2020. The stability analysis of the model was carried out and the basic reproduction number was calculated via the next-generation matrix. The fractional-derivative model was solved numerically and several graphs are presented, which could serve as a yardstick for reducing the spread of this menacing virus and for policy making.
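To illustrate the next-generation-matrix step, the sketch below computes the basic reproduction number for a plain SIR model (β: transmission rate, γ: recovery rate, S0: susceptible population at the disease-free equilibrium); the paper's actual fractional-order COVID-19 model and its parameters are not reproduced here.

```python
# Basic reproduction number via the next-generation matrix for a simple SIR model
# (illustrative stand-in, not the paper's fractional-order COVID-19 model).
import sympy as sp

beta, gamma, S0 = sp.symbols("beta gamma S0", positive=True)

# For the single infected compartment I: new-infection terms F and transfer terms V,
# both linearised at the disease-free equilibrium (S = S0).
F = sp.Matrix([[beta * S0]])
V = sp.Matrix([[gamma]])

K = F * V.inv()                     # next-generation matrix
R0 = list(K.eigenvals())[0]         # spectral radius (single eigenvalue here)
print(R0)                           # -> S0*beta/gamma
```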
Introduction: Both the conventional (classic) and advanced exergetic analytical methods were used to investigate the improvement potentials of a pulverised coal-fired 750 MW supercritical steam power plant. The results show that the subsystem that contributed the most to exergy destruction is the condenser (CND) at about 1.25%, closely followed by the boiler at 1.23%. Furthermore, the data obtained from the present study show that the dominant contributor to the overall exergy destruction in each of the subsystems was the endogenous part of the exergy destruction, Ė_D,k^EN. In addition, the results show that the steam turbines, especially TB1, TB4, TB5, TB6, TB7 to TB17, the condenser and the boiler would benefit immensely from improving their avoidable endogenous exergy destructions, Ė_D,k^AV,EN. This investigation also shows that the overall unavoidable exergy destruction within the entire power plant was about 42.8%, while the potential for improving the entire power plant was approximately 2.5%. Other important results from this study are comprehensively documented in the conclusion section.
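A hedged sketch of the four-way bookkeeping used in advanced exergy analysis (endogenous/exogenous and avoidable/unavoidable parts of a component's exergy destruction). The input figures are illustrative placeholders, not results for the 750 MW plant.

```python
# Hedged sketch of the advanced-exergetic split of a component's exergy destruction.
# All figures are illustrative placeholders, not results from the plant study.

def split_exergy_destruction(E_D, E_D_EN, E_D_UN, E_D_UN_EN):
    """Split E_D (MW) into endogenous/exogenous and avoidable/unavoidable parts.

    E_D       : total exergy destruction of the component
    E_D_EN    : endogenous part (component considered with all others ideal)
    E_D_UN    : unavoidable part (technology-limited minimum)
    E_D_UN_EN : unavoidable endogenous part
    """
    return {
        "exogenous": E_D - E_D_EN,
        "avoidable": E_D - E_D_UN,
        "avoidable_endogenous": E_D_EN - E_D_UN_EN,                   # the priority term highlighted above
        "avoidable_exogenous": (E_D - E_D_UN) - (E_D_EN - E_D_UN_EN),
    }

if __name__ == "__main__":
    print(split_exergy_destruction(E_D=50.0, E_D_EN=38.0, E_D_UN=30.0, E_D_UN_EN=24.0))
```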
Introduction: The OpenFOAM computational fluid dynamics code was used to investigate the performance of three combustion models, namely, Muppala, Zimont and Algebraic. The performance characteristics of these models were tested in a fully premixed (stoichiometric, ϕ = 1.0), modern, high-performance 4-valve, iso-octane, dual overhead cam (DOHC) engine with a quasi-symmetric pent-roof combustion chamber running at 1500 revolutions per minute. The performance characteristics of the three combustion models were found to be reasonably representative of measurement data that are usually observed in internal combustion engine test bed experiments. The combined or overall duration Δθ_o of the flame development period Δθ_d and the propagation period Δθ_b′ was 5.53 ms, 5.69 ms and 6.04 ms for the Muppala, Zimont and Algebraic models, respectively, and these values correspond to 49.75°CA, 51.175°CA and 54.425°CA, which are well within the acceptable crank-angle range of 30° ≤ Δθ_o ≤ 90° during typical internal combustion engine operations. The maximum pressures, with the corresponding crank-angle positions at which they occur, for the Muppala, Zimont and Algebraic models were found to be (59.86 bar, 6.83°CA), (53.19 bar, 11.9°CA) and (47.64 bar, 12.1°CA), respectively. Furthermore, the results from the present study show that the flame-development period Δθ_d was almost identical for the three combustion models, whereas the rapid-burning interval Δθ_b′ was almost indistinguishable for the three combustion models up to about the 80% mark of the regress variable. In specific terms, the complete salient findings from this study are summarised in the conclusion section of this study.
Introduction: A spark ignition engine two-zone simulation code was used to conduct a systematic study of the effects of combustion chamber wall temperature, start of heat release/ignition timing, fuel–air equivalence ratio, engine speed, indicated mean effective pressure and exhaust gas recirculation, on an individual basis, on NO_x emissions in a 5.734-liter, V8 spark ignition engine. The two-zone model, which incorporates heat transfer, blow-by and other losses, was used. In this model, the flame traverses the charge, resulting in burned and unburned zones. The unburned zone contains the reactants (fuel and air), and there is no reaction between the constituents. The burned zone consists of the products of combustion and dissociation. The formation of nitric oxide was obtained using the extended Zeldovich nitric oxide reaction mechanism. The simulation program computes both equilibrium and rate-limited values of the concentration of oxides of nitrogen (NO_x) in parts per million (ppm) as a function of crank angle, as well as its concentration in the exhaust gas stream. The study shows that equilibrium NO is formed immediately after the start of combustion and, because of its strong dependence on temperature, it rises rapidly to a maximum value of 8454.75 ppm, then declines rapidly as the pressure and temperature fall during the expansion stroke to a final NO_x concentration of 28.7 ppm. The results also show that spark retard and exhaust gas recirculation are effective strategies for NO_x emission control. The complete results from this study are summarized in the conclusion.
Introduction: A validated spark ignition engine combustion model was used to investigate the operational feasibility of subjecting a 5.734 L, V8 engine to the Honda variable (valve) timing and electronic lift control (VTEC) management strategy. The numerical results were found to be in good agreement with measured data during the compression and expansion strokes. However, the numerical model overestimated the measured data by approximately 10%-15% during the combustion phase. The reason for this discrepancy could be that the turbulence levels within the combustion chamber during the experiment were not specified. Therefore, we accounted for turbulence in the simulation with a flame speed multiplying factor, f_t = 7.0. The investigations cover operations at low, mid-range and full power, for fuel-air equivalence ratios ϕ of 0.67, 1.10 and 1.18. The results of the study show that operational viability of a 5.734 L, V8 engine subjected to the VTEC engine management scheme was achievable, provided high levels of turbulence were maintained, especially for extra-lean mixtures, that is, for a fuel-air equivalence ratio ϕ = 0.67. This work shows that the values of the flame speed factor, f_t, have to be carefully selected in order to achieve complete combustion.
Introduction: A computational study was used to investigate the effects of temperature and pressure on the extended Zeldovich mechanism. Two temperatures, namely 2,600 and 1,900 K, were used while keeping the species concentration of all component products constant at 1.0 mol/m^3. These temperatures were selected because they represent typical operating conditions for internal combustion engines. Pressure was varied by decreasing the species concentration of all component species by 10% while keeping the temperatures constant. The pressures were 1.513 × 10^5 and 1.362 × 10^5 N/m^2 for T = 2,600 K, and 1.106 × 10^5 and 9.95 × 10^4 N/m^2 for T = 1,900 K. The estimated uncertainties in the global errors of [NO], [N], and [O] for the conditions investigated were found to vary between (±4.10 and ±4.34%), (±9.33 and ±10.1%), and (±5.73 and ±9.07%), respectively. The numerical results show that high temperatures result in a faster rate of production of both [NO] and [O] and a faster rate of [N] depletion. At moderately low temperatures (about 1,900 K), the rate of depletion of both oxygen and nitrogen atoms is very high. It was observed that changes in pressure had a minor, or at best marginal, influence on the production of [NO], [N], and [O]. The work also shows that, for all the equilibrium points investigated, only one in each case was physical and the others were non-physically realizable because at least one of their equilibrium values was negative.
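To make the mechanism discussed above concrete, the sketch below integrates the three extended Zeldovich reactions (N2 + O → NO + N, N + O2 → NO + O, N + OH → NO + H) for NO and N with frozen O, O2, OH and N2 pools. The rate constants, concentrations and the forward-only simplification are placeholder assumptions for illustration, not the values or the scheme used in the study.

```python
# Hedged sketch: forward-only extended Zeldovich kinetics with frozen O, O2, OH and N2
# pools. Rate constants and concentrations are illustrative placeholders (arbitrary
# consistent units), not values from the study; reverse reactions are neglected.

k1, k2, k3 = 1.0e-2, 5.0e-3, 8.0e-3       # assumed forward rate constants
O, O2, OH, N2 = 1.0, 1.0, 1.0, 1.0        # frozen radical/molecule pools

NO, N = 0.0, 0.0
dt, t_end = 1.0e-3, 1.0
for _ in range(int(t_end / dt)):          # explicit Euler integration
    r1 = k1 * O * N2                      # N2 + O  -> NO + N
    r2 = k2 * N * O2                      # N  + O2 -> NO + O
    r3 = k3 * N * OH                      # N  + OH -> NO + H
    NO += dt * (r1 + r2 + r3)
    N  += dt * (r1 - r2 - r3)

print(f"[NO] = {NO:.4e}, [N] = {N:.4e} after {t_end} time units")
```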
Introduction: An experimental study based on Laser Doppler Velocimetry (LDV) measurements was used to investigate the effects of piston geometries and intake swirl levels on the structure of flow quantities at the top dead center (TDC) position of a simulated internal combustion engine (ICE). A rapid intake and compression machine (RICM) was used for this study. The two combustion chamber types investigated were the bowl-in-piston and the re-entrant bowl geometries. The experimental results show that turbulence is produced around the TDC position in the high-shear regions formed near the piston bowl rim. These high-shear regions were established by the interaction of squish and swirl. The results also show that the intake-generated turbulence had decayed significantly, such that its influence on turbulence at TDC was negligible. In general, it was observed that turbulence around TDC was more intense in the re-entrant bowl under all swirl conditions than in the bowl-in-piston combustion chamber. The data from this study show that the random uncertainties in the mean radial and tangential velocities for both chamber configurations range from ±13.2% to ±19.2%, while the uncertainties in the root mean square velocities were about ±14.1%.
Introduction: Numerical simulation studies were used to investigate the effects of piston geometries and intake swirl levels on the structure and evolution of the flow field at the top dead center (TDC) position of a simulated internal combustion engine (ICE). The piston shapes investigated were the disc-shaped chamber, bowl-in-piston, and re-entrant bowl geometries. The conditions considered are similar to those of production engines. The work focused on the near-wall and bowl-entrance regions of the axisymmetric combustion chambers where strong swirl–squish interactions take place. The numerical calculations were performed with the KIVA-II code. The k–ε turbulence model was used in the present study. The results of this study show that high-shear regions and the consequent turbulence production occurred near the bowl entrance around the TDC of compression. Furthermore, this study shows that the re-entrant bowl piston configuration generated more turbulence than both the disc-shaped and bowl-in-piston combustion chambers.
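For reference, a standard textbook form of the k–ε transport equations is reproduced below; the exact variant implemented in KIVA-II may differ in details (wall treatment, compressibility and spray source terms), so this is only indicative.

```latex
% Standard high-Reynolds-number k--epsilon model, shown for reference only;
% the KIVA-II implementation may include additional source terms.
\frac{\partial(\rho k)}{\partial t} + \nabla\cdot(\rho \mathbf{U} k)
  = \nabla\cdot\!\left[\left(\mu + \frac{\mu_t}{\sigma_k}\right)\nabla k\right]
  + P_k - \rho\varepsilon,
\qquad
\frac{\partial(\rho \varepsilon)}{\partial t} + \nabla\cdot(\rho \mathbf{U} \varepsilon)
  = \nabla\cdot\!\left[\left(\mu + \frac{\mu_t}{\sigma_\varepsilon}\right)\nabla\varepsilon\right]
  + C_{1\varepsilon}\frac{\varepsilon}{k}P_k - C_{2\varepsilon}\rho\frac{\varepsilon^2}{k},
\qquad
\mu_t = \rho\, C_\mu \frac{k^2}{\varepsilon},
\quad C_\mu = 0.09,\; C_{1\varepsilon} = 1.44,\; C_{2\varepsilon} = 1.92,\;
\sigma_k = 1.0,\; \sigma_\varepsilon = 1.3 .
```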
Introduction: A spark-ignition engine simulation code was used to study the effects of varying the following engine operating parameters (compression ratio, fuel–air equivalence ratio, residual mass fraction, and start of heat release/ignition timing) on an individual basis on the performance of a 5.734 L, V-8 spark-ignition engine. The two-zone model was used, in which the flame traverses the charge resulting in burned and unburned zones. The unburned zone contains the reactants (fuel and air), and there is no reaction between the constituents. The burned zone consists of the products of combustion and dissociation. The results of the present work show that maximum pressure and temperature occur at a fuel–air equivalence ratio ϕ of 1.01. Furthermore, the study shows that retarding or advancing the ignition timing from maximum brake torque causes a reduction in the power output of the cycle (indicated mean effective pressure) and hence in the cycle thermal efficiency as well. In general, it was observed that the computed results underestimated the measured values of the indicated mean effective pressure as follows: at 0.7 ≤ ϕ ≤ 1.0, the computed results were between 5 and 6.86% lower than the measured engine data, even though the qualitative trend was in excellent agreement with it, whereas the measured indicated mean effective pressure was about 6.86–16.51% higher than the simulated results for fuel–air equivalence ratios in the range 1.0 ≤ ϕ ≤ 1.4. The data reported indicate that in the range 0.7 ≤ ϕ ≤ 1.0 the indicated thermal efficiency, η, increases, whereas the indicated thermal efficiency has an approximate inverse relationship with the fuel–air equivalence ratio, ϕ, that is, η ~ 1/ϕ, in the range 1.0 ≤ ϕ ≤ 1.4. The other results from this study are summarized in the conclusion.
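A hedged numeric illustration of the rich-mixture trend reported above (indicated thermal efficiency roughly inversely proportional to ϕ for 1.0 ≤ ϕ ≤ 1.4). The stoichiometric efficiency value is an assumed placeholder, not a result from the engine simulation.

```python
# Hedged illustration of the eta ~ 1/phi trend for rich mixtures reported above.
# The stoichiometric efficiency below is an assumed placeholder, not simulation output.
eta_stoich = 0.38              # assumed indicated thermal efficiency at phi = 1.0
for phi in (1.0, 1.1, 1.2, 1.3, 1.4):
    eta = eta_stoich / phi     # approximate inverse relationship for 1.0 <= phi <= 1.4
    print(f"phi = {phi:.1f}  ->  eta ~ {eta:.3f}")
```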
Introduction: The theory of thermoeconomics and local optimization were used to investigate how the cost of the resources consumed by a 450 MW power plant varies with the unit cost of the resources consumed, the technical production coefficients of the productive structure and/or the external demand for the products. In order to accomplish this, the costs of exergy of the productive structure were analyzed under three different conditions by using the relevant characteristic equations. In general, it was found that the thermoeconomic cost of a flow consists of two parts, namely the monetary cost of the fuel exergy (natural gas in the present study) needed to produce the flow, that is, its thermoeconomic cost, and the costs due to the productive process (cost of capital equipment, maintenance, etc.). The results show that the steam leaving the boiler has the lowest exergy cost, while the condenser has the highest. The sequential quadratic programming (SQP) algorithm was used to obtain the optimized solutions of each major component of the plant. It was found that substantial operational and capital cost benefits were realized by optimizing most of the major plant equipment (boiler, turbines, feedwater heaters and the pumps). However, optimization of the condenser did not yield any cost benefit in capital equipment cost, but did produce some savings in operational cost.
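A hedged sketch of component-level cost optimisation with a sequential quadratic programming solver (SciPy's SLSQP implementation). The cost model, constraint and all numbers are toy placeholders, not the plant's thermoeconomic characteristic equations.

```python
# Hedged sketch of SQP-based local optimisation of a single component's cost.
# The objective, constraint and numbers are toy placeholders, not plant data.
import numpy as np
from scipy.optimize import minimize

def total_cost(x):
    """Toy objective: fuel cost rises with exergy input, capital cost falls with it."""
    exergy_in = x[0]                      # MW of fuel exergy supplied to the component
    fuel_cost = 4.0 * exergy_in           # assumed unit fuel cost ($/h per MW)
    capital_cost = 900.0 / exergy_in      # assumed capital/maintenance trade-off
    return fuel_cost + capital_cost

# Require at least 10 MW of product exergy at an assumed 70% component efficiency.
constraints = [{"type": "ineq", "fun": lambda x: 0.7 * x[0] - 10.0}]
result = minimize(total_cost, x0=np.array([20.0]), method="SLSQP",
                  bounds=[(1.0, 100.0)], constraints=constraints)
print(result.x, result.fun)               # optimum near 15 MW for these toy numbers
```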
Introduction: Abstract
An exergetic analysis of the main subsystems of a steam power plant was undertaken. The product of the entropy generation rate and the effective temperature (modified Gouy–Stodola law) was used to calculate the real power/exergy loss due to various irreversibilities present in the subsystems. The use of the effective temperature was deemed appropriate because its derivation depends on the dynamics of the subsystem(s) under consideration. The results of our analysis showed that the highest rate of entropy generation occurred in the condenser while the maximum rate of entropy generation for the feed-water heaters was observed in the Heater6. The reasons for these observations were discussed in the relevant sections of this study. The greatest power/exergy loss was observed in double-flow low-pressure turbine. This could be due to the fact that the inlet steam was split and flows axially in opposite directions through the turbine blades, thereby creating a lot of irreversibilities. According to the results of this study, it can be concluded that the most irreversible unit in the system is the condenser with 149 MW loss and followed by the high-pressure turbine with a power/exergy loss of 22.5 MW. It was observed that when environmental temperature was used in the calculation of power/exergy loss, the results always underestimated the real power/exergy loss. Finally, the results of this study have highlighted the areas where the various subsystems of the power plant can be improved. The analyses show where the largest quantities of power/exergy loss are being dissipated within the plant, thus pointing to areas where improvement in energy usage can be made. The use of exergy as a potential design and retrofit tool was also pointed out.
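A short, hedged illustration of the Gouy–Stodola relation used above: lost power equals a reference temperature times the entropy generation rate, with the modified form using an effective temperature instead of the environmental temperature. All numbers are placeholders, not plant data.

```python
# Hedged illustration of the (modified) Gouy-Stodola relation: W_lost = T * S_gen.
# Numbers are placeholders, not results from the power plant study.
S_gen = 0.45        # kW/K, assumed entropy generation rate of a subsystem
T_env = 298.15      # K, environmental (dead-state) temperature
T_eff = 320.0       # K, assumed effective temperature for the subsystem

print("classical Gouy-Stodola loss:", round(T_env * S_gen, 1), "kW")
print("modified (effective-T) loss:", round(T_eff * S_gen, 1), "kW")  # larger, as the study reports
```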
Introduction: A combined study based on laser Doppler velocimetry measurements and numerical simulation was undertaken in order to validate the KIVA II numerical code. A rapid intake and compression machine was used for the experimental studies, while the numerical calculations were performed with the KIVA II computational fluid dynamics code. The k–ε turbulence model was used to represent the effects of turbulence. The measured mean radial and tangential velocities were found to be generally higher than their computed counterparts. The differences between these velocities range from 3.5% to 7% in magnitude for the re-entrant bowl chamber, while they vary from 5% to 11% for the bowl-in-piston combustion chamber configuration. The measured values of the turbulence intensity close to the bowl axis in the re-entrant bowl chamber configuration were fairly accurately predicted, but the quality of the prediction diminishes as the bowl entrance region is approached. The values of the turbulence intensity were, however, poorly reproduced near the axis of the bowl-in-piston chamber assembly. The experimental and predicted values of turbulence were found to differ by 7% to 20% (re-entrant chamber) and 13% to 20% (bowl-in-piston). The experimental data obtained from this study show that the random uncertainties in the mean radial and tangential velocities for both chamber configurations range from ±13.2% to ±19.2%, while the uncertainties in the root mean square velocities were about ±14.1%.
Introduction: The main objective of this work is to present a brief tutorial on minimal surfaces. Furthermore, a great deal of effort was made to make this presentation as elementary and as self-contained as possible. It is hoped that the materials presented in this work will give some motivation for beginners to go further in the study of the exciting field of minimal surfaces in three-dimensional Euclidean space and in other ambient spaces. The insight into some physical systems has been greatly improved by modeling them with surfaces that locally minimize area. Such systems include soap films, black holes, compound polymers, protein folding, crystals, etc. The relevant mathematical field started in the 1740s but has recently become an area of intensive research. This is due to the widespread availability of powerful and relatively inexpensive computers combined with the preponderance of suitable graphical application software packages. Furthermore, its anticipated use in medical, industrial and scientific applications has provided further impetus for research efforts concerning minimal surfaces. It was shown that minimal surfaces do not always minimize area. Some advantages and shortcomings of the Weierstrass equations were highlighted. Finally, the proofs of Bernstein's and Osserman's theorems were sketched out and some other improved (sharp) versions of these theorems were presented, followed by some remarks on surfaces of finite total curvature.
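Since the Weierstrass equations are mentioned above, one common form of the Weierstrass–Enneper representation is reproduced here for orientation; the tutorial itself should be consulted for its exact conventions and hypotheses.

```latex
% Weierstrass--Enneper representation of a minimal surface in R^3, in one common
% convention: f holomorphic and g meromorphic on a simply connected domain,
% with f g^2 holomorphic.
x_1(\zeta) = \operatorname{Re}\int_{\zeta_0}^{\zeta} \tfrac{1}{2}\, f(w)\bigl(1 - g(w)^2\bigr)\,dw,\qquad
x_2(\zeta) = \operatorname{Re}\int_{\zeta_0}^{\zeta} \tfrac{i}{2}\, f(w)\bigl(1 + g(w)^2\bigr)\,dw,\qquad
x_3(\zeta) = \operatorname{Re}\int_{\zeta_0}^{\zeta} f(w)\,g(w)\,dw .
```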
Introduction: A single expression for estimating the nominal pitting strength of steel materials, based on surface hardness, is developed from first principles for a reliability of 99% at 10^7 load cycles. It requires the hardness values to be measured on the Vickers hardness scale. The expression may be used for any steel material processed by hot rolling, cold drawing, quenching and tempering, or case-hardening. The formulation incorporates a nominal design factor at 99% reliability which is estimated from a probabilistic model based on the lognormal probability density function. Pitting strength estimates from the expression are compared with American Gear Manufacturers Association (AGMA) estimates and data from other sources as indicated in Tables 3 and 4. The expression predicts lower values at low hardness but higher values at high hardness. The variance is between −15.21% and 10.13% for through-hardened steels. For case-hardened steels, the variances between the estimates and available data range from 14.23% to 20.26%. These variances appear to be reasonable considering the many factors involved in pitting resistance. The main advantage of this study is that the pitting strength of new steel materials may be estimated for initial design sizing without long and costly contact fatigue testing, which of course remains necessary for design validation. Also, the estimation method developed may be applied to other materials, metallic and non-metallic. Suggestions are made for estimating some pertinent pitting strength adjustment factors when considering field or service pitting strength.
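A hedged sketch of the kind of lognormal reliability adjustment described above: it computes the strength exceeded with 99% probability from a median strength and an assumed coefficient of variation. The formulation and numbers are illustrative only and are not the paper's design-factor model.

```python
# Hedged sketch (not the paper's formulation): 99%-reliability strength from a
# lognormal strength model with an assumed median and coefficient of variation.
import math
from statistics import NormalDist

def lognormal_reliable_strength(median_strength, cov, reliability=0.99):
    """Strength exceeded with probability `reliability` for a lognormal model."""
    sigma_ln = math.sqrt(math.log(1.0 + cov ** 2))   # log-space standard deviation
    z = NormalDist().inv_cdf(1.0 - reliability)      # about -2.326 for 99% reliability
    return median_strength * math.exp(z * sigma_ln)

# Example with assumed values: median pitting strength 1200 MPa, 10% scatter.
s99 = lognormal_reliable_strength(1200.0, 0.10)
print(f"99%-reliability strength ~ {s99:.0f} MPa; implied design factor ~ {1200.0 / s99:.3f}")
```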
Introduction: A method for the design sizing task of cylindrical worm gearsets is presented that gives an estimate of the initial value of the normal module. Expressions are derived for the worm pitch diameter of integral and shell worms as well as for the active face width of the gear and the threaded length of the worm. An attempt is made to predict the contact strength of bronze materials against scoring resistance. Four examples of design sizing tasks for cylindrical worm gears are carried out using the approach presented, and the results are compared with previous solutions from other methods. The results for the first three examples show excellent agreement with previous solutions obtained with the American Gear Manufacturers Association (AGMA) method. The results of the fourth example are slightly more conservative than those of DIN3999 but are practically similar. Therefore, it appears that a systematic, reliable and more scientifically based method for cylindrical worm drive design sizing has been developed.
Introduction: A revised Lewis bending fatigue stress capacity model for spur gears is presented and used to study the influence of mesh friction on root stress. The original Lewis formula is modified to account for dynamic loads, shear stress, and mesh friction in spur gear design. The study reveals that mesh friction may increase bending stress by up to 6% in enclosed cylindrical gear drives when an average mesh friction coefficient of 0.07 is assumed. A possible increase of 15% in root stress may occur in open gear drives when the mesh friction coefficient is taken as 0.15, a value considered representative of properly maintained open drives. To account for the mesh frictional load and other factors directly influencing mesh friction, a friction load factor of 1.1 is suggested and introduced into gear service load estimation for enclosed gear drives, and 1.15 for open gear drives.
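A hedged sketch showing how the friction load factors suggested above (1.1 for enclosed drives, 1.15 for open drives) could be applied on top of the classical Lewis bending stress. The gear geometry and load are assumed example values, not cases from the paper, and the classical Lewis formula is used rather than the paper's revised model.

```python
# Hedged sketch: classical Lewis root bending stress scaled by the suggested
# friction load factor. Geometry and load below are assumed example values.
def lewis_bending_stress(W_t, module_mm, face_width_mm, lewis_form_factor):
    """Classical Lewis root bending stress, sigma = W_t / (b * m * Y), in MPa."""
    return W_t / (face_width_mm * module_mm * lewis_form_factor)

W_t = 2500.0                 # N, tangential load (assumed)
m, b, Y = 4.0, 40.0, 0.32    # module (mm), face width (mm), Lewis form factor (assumed)

sigma_nominal = lewis_bending_stress(W_t, m, b, Y)
for drive, K_friction in (("enclosed", 1.10), ("open", 1.15)):
    print(drive, round(K_friction * sigma_nominal, 1), "MPa")
```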
Introduction:
Background:
During operation, a cylindrical gearset experiences tangential, radial, and axial (helical gears only) force components that induce bending, compressive, and shear stresses at the root area of the gear tooth. Accurate estimation of the effective bending stress at the gear root is a challenge. Lewis was the first to attempt estimating the root bending stress of spur gears with reasonable accuracy. Various gear standards and codes in use today are modifications and improvements of the Lewis model.
Objective:
This research aims at revising the Lewis model by making adjustments for dynamic loads, shear stresses, axial bending stress for helical gears, and a stress concentration factor that is independent of the moment arm of the tangential or axial force component.
Methods:
An analytical approach is used in formulating a modified formula for the root bending stress in cylindrical gears, starting with the original Lewis model. Intermediate expressions are developed in the process, and works from many previous authors are reviewed and summarized. The new model is used to estimate the root bending stress in four example gearsets with helix angles from 0° to 41.41°, and the results are compared with those of the AGMA (American Gear Manufacturers Association) formula.
Results:
Analysis of the examples shows that neglecting the radial compressive stress overestimates the root bending stress by 5.27% on average. When shear stresses are ignored, the root bending stress is underestimated by 7.49% on average. It is important, therefore, to account for both compressive and shear stresses in cylindrical gear root bending stress. When the root bending stress estimates from the revised Lewis model were compared with AGMA results, deviations in the range of −4.86% to 26.61% were observed. The stress estimates from the revised Lewis formulae were mostly higher than those of AGMA.
Conclusion:
The new root bending stress model uses stress concentration factors (normal and shear) that are independent of the point of load application on the gear tooth. This decoupling of stress concentration factor from the load moment arm distinguishes the new model from AGMA formula and brings bending stress analysis in gear design in line with classical bending stress analysis of straight and curved beams. The model can be used for both normal contact ratio and high contact ratio cylindrical gears.
Introduction: A numerical simulation code was used to conduct a systematic study of the effects of fuel-air equivalence ratios in the range 0.7 ≤ ϕ ≤ 1.4 and a compression ratio rc = 8.0 on key operating parameters, such as pressure, rate of change of pressure dp/dt, flame extinction temperature, burn rate frequency, combustion efficiency η_b, source term, mass burn fractions and heat loss in a simulated 5.734 liter, V8 spark-ignition engine. The data show that the burn rate characteristics of the fuel and oxidizer are qualitatively perfectly correlated. The results also show that as flame extinction/flameout is approached, the fuel consumption rate R_fu increases rapidly with temperature for fuel-air equivalence ratios ϕ in the range 0.7 ≤ ϕ ≤ 1.4. The average burn rate frequency (per second), f_br (1/s), varies over 11.2 ≤ f_br ≤ 137.0 for fuel-air equivalence ratios ϕ in the range 0.7 ≤ ϕ ≤ 1.4. The results further show that the fastest fuel consumption rate was for the fuel-air equivalence ratio ϕ = 1.4 in the time interval 0.0 ≤ t ≤ 0.61 ms, while the slowest corresponds to ϕ = 0.7 with a corresponding time interval of 0.0 ≤ t ≤ 3.98 ms. Moreover, the data show that for fuel-air equivalence ratios ϕ in the range 0.7 ≤ ϕ ≤ 1.4 the fuel consumption rate increases monotonically after the initial ignition delay period. The combustion efficiency η_b of the engine under investigation was found to be in the range 94.1% ≤ η_b ≤ 94.4% for lean mixtures, that is, for ϕ < 1.0; the corresponding values of combustion efficiency η_b for fuel-rich mixtures were in the interval 93.8% ≤ η_b ≤ 94.1%. The other results from this study are summarized in the conclusion.
Introduction: Based on the Tredgold geometric approximation, a transparent contact stress capacity model for straight bevel gears is presented. A bevel load factor is defined which provides a kinetic link between the physical bevel gear and the virtual spur gear. Three design cases of contact stress computations from different references are carried out and compared with AGMA estimates. Differences in results vary from 2.4% to 23.4%, with the new model estimates generally lower than AGMA values. The design sizing version of the new model is applied in two design cases. Comparison of the service load factor values for design sizing and design verification indicates a difference of 0.76% in case 4 and −1.65% in case 5. While more design cases are necessary for further verification of the design approach presented, it may, however, be concluded from the results of our study that the design model presented appears reasonable.
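For context, the standard Tredgold (virtual spur gear) equivalence that underlies the approach described above is sketched here; the paper's bevel load factor itself is not reproduced.

```latex
% Tredgold approximation: a straight bevel gear with z teeth and pitch cone angle
% \delta is analysed as a virtual spur gear developed on the back cone.
r_v = \frac{r}{\cos\delta}, \qquad z_v = \frac{z}{\cos\delta},
```

where r is the back-cone pitch radius, r_v the virtual (equivalent) pitch radius, and z_v the virtual number of teeth of the equivalent spur gear.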
Introduction: The open source Field Operation and Manipulation (OpenFOAM) software was used to investigate the performance of a fully premixed, modern, high-performance 4-valve, iso-octane, dual overhead cam (DOHC) engine with a quasi-symmetric pent-roof combustion chamber running at 1500 revolutions per minute. The peak pressure occurred at TDC and had a value of about 30 bar. The results from this study show that the maximum combustion temperature occurred at approximately 95 degrees crank angle ATDC and has a volume-averaged value of about 2700 K, whereas the actual computed peak temperature was found to be about 3000 K and occurred at grid point 12630. The other temperatures that were found to be higher than the volume-averaged temperature were in the range 2968.81 K to 2974.01 K and correspond to grid point positions 12630 to 12633. The flame-wrinkling factor, Ξ = S_t/S_u, was found to be in the range 1.0 ≤ Ξ ≤ 3.8. The dynamics of the regress variable b was accurately predicted.
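The flame-wrinkling factor quoted above relates the turbulent and unstrained laminar flame speeds; written out with the range reported in the study, it reads:

```latex
% Flame-wrinkling factor relating turbulent and laminar flame speeds,
% with the range reported in the study above.
\Xi = \frac{S_t}{S_u}, \qquad S_t = \Xi\, S_u, \qquad 1.0 \le \Xi \le 3.8 .
```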
Introduction: Helical bevel gears have inclined or twisted teeth on a conical surface, and the common types are skew, spiral, zerol, and hypoid bevel gears. However, this study does not include hypoid bevel gears. Due to the geometric complexities of bevel gears, commonly used methods in their design are based on the concept of an equivalent or virtual spur gear. The approach in this paper is based on the following assumptions: a) the helix angle of helical bevel gears is equal to the mean spiral angle, b) the pitch diameter at the back end is defined as that of a helical gear, and c) Tredgold's approximation is applied to the helical gear. Upon these premises, the contact stress capacity of helical bevel gears is formulated in explicit design parameters. The new contact stress capacity model is used to estimate the contact stress in three gear systems for three application examples, and the results are compared with previous solutions. Differences between the new estimates and the previous solutions vary from −3% to −11%, with the new estimates being consistently but marginally lower than the previous solution values. Though the differences appear to be small, they are significant because the durability of gears is strongly influenced by the contact stress. For example, a 5% reduction in contact stress may result in almost a 50% increase in durability in some steel materials. The equations developed do not apply to bevel crown gears.