React patterns have advanced and grown, and you can now follow multiple approaches to rendering a component. One such pattern is the compound component.
Have you ever needed to render multiple components that share the same state in multiple places? For example, let’s build an input component that has a label, where both components respond to validation. Let’s start with a very basic example.
We have an App component which renders an Input that responds to validation. Below is the code for InputWithValidation:
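A minimal sketch of what InputWithValidation might look like (the exact markup and validation rule are illustrative assumptions):

```jsx
import React from 'react';

class InputWithValidation extends React.Component {
  state = { isValid: true };

  // Run a simple validation on every change; here the rule is just "not empty".
  handleChange = (event) => {
    this.setState({ isValid: event.target.value.length > 0 });
  };

  render() {
    const { isValid } = this.state;
    return (
      <div>
        <span style={{ color: isValid ? 'black' : 'red' }}>
          {this.props.label}
        </span>
        <input onChange={this.handleChange} />
      </div>
    );
  }
}
```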
As we can see above, we have a component that renders a span and an input tag. When the input is changed it performs some validation and updates the isValid state value. This is fine, but what if we want to change how it is rendered? Say, for some reason, we want the label below the input. We could hack a solution into the existing code by adding an additional prop to InputWithValidation which sets the placement of the label.
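The hacked version might use a position prop inside the render method, something like this sketch (the prop name and values are assumptions):

```jsx
// Inside the InputWithValidation class: a `position` prop decides
// whether the label renders before or after the input.
render() {
  const { isValid } = this.state;
  const label = (
    <span style={{ color: isValid ? 'black' : 'red' }}>
      {this.props.label}
    </span>
  );
  return (
    <div>
      {this.props.position === 'top' && label}
      <input onChange={this.handleChange} />
      {this.props.position === 'bottom' && label}
    </div>
  );
}
```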
So now we check if the position is top and render the label before the input, or check if the position is bottom and render it after. This works, but it is hacky and hard to maintain. For example, if we add a sub-label we then need to repeat the whole process for it. This is where the compound component pattern comes into play: it allows the consumer to use the component however they wish.
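With the compound component version, the consumer might arrange the pieces like this (sub-component names are illustrative):

```jsx
// The consumer decides the layout: label first, input first, or anything else.
<InputWithValidation>
  <InputWithValidation.Input />
  <InputWithValidation.Label>Username</InputWithValidation.Label>
</InputWithValidation>
```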
We can see now that we have access to the inner components and are able to arrange the display logic however we wish. Yet these components are still fully aware of the parent state of the InputWithValidation component. But how is this built inside the InputWithValidation component?
First we need to add some static properties to the InputWithValidation class. These properties are actually functional React components; they can be functional because all the data they need is passed in as props.
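The static properties might be defined like this sketch, assuming the Label/Input names from the usage above:

```jsx
// Two static functional components: everything they need arrives as props.
InputWithValidation.Label = ({ isValid, children }) => (
  <span style={{ color: isValid ? 'black' : 'red' }}>{children}</span>
);

InputWithValidation.Input = ({ change }) => (
  <input onChange={change} />
);
```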
We can see that the two static properties are components that destructure the isValid prop; the span also takes children to allow for a custom label, and the input takes the change prop so the validation can be performed. These props get passed in via the render method below:
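A sketch of that render method, using the React Children API and React.cloneElement as described:

```jsx
// Inside the InputWithValidation class: clone each child and merge in
// the parent's state and change handler as props.
render() {
  return React.Children.map(this.props.children, (child) =>
    React.cloneElement(child, {
      isValid: this.state.isValid,
      change: this.handleChange,
    })
  );
}
```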
The render method returns the result of mapping over each child, cloning it, and passing it new props: isValid and change. We use the React Children API as it provides methods for interacting with the opaque React children data structure. We then use React.cloneElement, which clones the current element and returns a new React element based on it. The resulting element has the original element’s props with the new props merged in shallowly. This allows us to add additional props to the children passed in from the parent container.
This provides us with an extendable, customisable component that the consumer can restructure and mould as they wish.
I recently built a React development utils package and thought I would share my reasons and how I built it.
PropTypes also allows you to mark a prop as required, which displays a warning message in the browser console when that prop has not been provided. Note this only happens in development mode. This helps developers understand a component’s required props; it’s very useful when the component you are using has little or no documentation.
However, there could be a case where you need to link props. For example, a Modal component might have an isOpen prop which is required and a name prop which only matters while the modal is open: name is also required, but only really needed when the modal is open. prop-type-utils to the rescue: using a custom prop type function you can create your own PropType checks.
PropTypes supports custom validator functions, which allow you to write your own checks. A check function takes the component props, the prop name and the component name, as shown below:
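A minimal sketch of such a custom check, assuming the Modal example above (the function and prop names are illustrative, not part of prop-type-utils):

```javascript
// A custom validator is just a function of (props, propName, componentName).
// This hypothetical check requires the prop whenever `isOpen` is true.
function requiredWhenOpen(props, propName, componentName) {
  if (props.isOpen && props[propName] == null) {
    return new Error(
      'The prop `' + propName + '` is required in `' + componentName +
      '` when `isOpen` is true.'
    );
  }
  // Returning nothing (undefined) means the check passed.
}
```

It would then be used like any other PropType: `Modal.propTypes = { name: requiredWhenOpen }`.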
As you can see, when performing the check you return an Error if it fails, and return nothing otherwise. From this I decided to build out my own utils.
Prop Type Utils is a collection of useful prop type validation rules.
I had the pleasure of attending this year’s BelTech conference, hosted in the amazingly designed Titanic building. The conference was based around a few topics, including smart cities, startups and practitioners. I focused on the smart cities and practitioner talks.
The opening keynote was presented by Kate Atkinson, co-founder of Datasnap.io, a company focused on developing products that provide data and insight into customer engagement. Using proximity-tracking technology, Datasnap.io tracks the movement of patrons at venues and provides that data to the customer. Kate talked about the challenges of working with customers and improving the data they require. One of the big requests from customers was live maps of a location, to watch live data on patrons and how they move around a venue. This can be hard to provide when no detailed floor maps are available. The solution was to map the proximity sensors to rooms or areas: instead of tracking the location of a patron, track their movement between sensors. The result gives data on how many patrons move from A to B and then to C, and how many move from A to C without stopping at B. This allows the customer to work out why patrons are skipping B and find ways to change it.
The smart cities session was run with a number of mayors from across the UK, US and Europe. The discussion started with the current state of smart cities; the cities brought into question were Skien (Norway), Lowell (Massachusetts), South Dublin (Ireland) and Donegal (Ireland). The main theme from each mayor was the focus on bringing companies together to provide a better smart city. However, this comes with challenges: the mayor of Donegal talked about issues with the structure of the local government and how parts had been broken up and segregated from other resources, such as online payment and the management of customer interactions. This does, however, provide a gap that can be filled with smart technologies. The mayor of South Dublin made the point that people expect more from the city with the higher taxes being introduced; these help provide more services, such as free wifi. The mayor of Lowell, meanwhile, focused on how the city was already smart, pushing more resources into existing schemes and even starting more projects. These schemes are the usual things people think of when they hear "smart city": funding startups and university-based incubator programmes. In a change from free wifi, online services and tech startups, the mayor of Skien focused on how the growth of the smart city can actually benefit citizens’ health and lifestyle. One issue facing all cities is increased life expectancy, which in turn puts more pressure on the health care system. Skien is focused on building smart homes and integrating smart systems into people’s lives to help protect and improve their way of life.
My takeaway from the talk: smart cities are becoming a big part of the political thought process and are not a thing of the future, though some locations are better suited than others. This can be driven by geography, such as remote areas (where it is always hard to provide high-speed broadband and the like). Other reasons could be linked to the technical competence of the local government; some of the mayors were not the most versed in modern tech, and this was clear at points in the talk.
You can read more on the talks below:
The Internet of Things
The main theme of the discussion was that IoT (the Internet of Things) is everywhere. Michael Crossey (Intel) discussed the changes in IoT and how they relate to the dropping cost of tech. This links back to Moore’s law: “The observation made in 1965 by Gordon Moore, co-founder of Intel, that the number of transistors per square inch on integrated circuits had doubled every year since the integrated circuit was invented. Moore predicted that this trend would continue for the foreseeable future.”
Moore’s law can create great value and improve IoT; Intel are using this to build a reference model that helps vendors create and expand IoT offerings and grow their products. John Shaw from Kingspan talked primarily about his own experiences with IoT in his business, which provides smart solar solutions. Their solar PV adoption is powered by Intel Quark. Kingspan sees IoT as a way of connecting not only devices but people, animals and objects in a smart way. John was very quick to brag about the advancements of Kingspan and how their head office is a carbon-negative office. IoT helped achieve this goal by using smart systems that talk to each other and allow the office to work efficiently.
Working with Large Systems
Joe Hughes of Glen Dimplex also focused on how his company is making advancements in home and commercial cooling/heating using IoT devices. With smart heaters, customers can control their heating in an energy-efficient way, and a system built on cloud-controlled heating with direct links to customers’ systems allows Glen Dimplex to provide an advanced service. Joe also talked about how this could in fact help the national grid: by monitoring customers’ on-site devices, they can track energy usage and patterns, which can allow the national grid to control the spinning up and down of resources. A question was put to the panel on the security implications of these systems. A product that talks to thousands of customer devices and can also talk to the national grid has the potential to bring down one of the largest connected systems in the country. This is a valid point, but as with any product, security must be a high priority.
IoT in Health
Gareth Tolerton of Total Mobile talked about the impact IoT has on health care and how capturing data on patients can help provide a better health care system. Tech like smart locks can provide safe and much-needed services to patients who receive at-home care by allowing programmatic access for care workers. This addresses issues for people who feel isolated and vulnerable: care workers can be granted access without having to obtain keys to a patient’s home, and if a care worker no longer treats or is linked to a patient, that access can be revoked. Other IoT systems can help provide personally tailored health care for patients, but as Gareth mentioned, work is still needed to build trust in the data collected and to sufficiently protect it.
One of my main concerns with the growing IoT market is the maturity of the network and infrastructure currently in place. I recently watched a promotional video for a smart bike pedal that connected to the internet via a SIM card and provided the rider with data. I struggle to see the feasibility of devices like this and other IoT devices. I’m currently with a major mobile provider and I struggle to get a full connection even in the centre of Belfast. Bandwidth, connection speeds and coverage are still big issues faced by many people. If we flood the networks with devices fighting for a connection, we could cripple the already struggling infrastructure.