Category Archives: Cloud Servers
Newly launched Cloud-shaped Internet Hosting by cdmon, the future of Hosting – Digital Journal
cdmon presents its new infrastructure, the most innovative Cloud in Europe, using the latest technology to provide excellent service to its customers.
cdmon wants to provide quality and innovation to its customers in a reasonable, transparent, and cordial way, and to that end it has created the fastest Cloud Hosting in Europe. cdmon developed this new project on a redesigned infrastructure: a platform based entirely on Intel Optane and NVMe (Non-Volatile Memory Express) SSD disks. This means its Cloud is 10x faster than those based on standard SSD disks, making it the fastest and most secure Cloud in all of Europe. Only the best for its customers.
This newly developed technology gives cdmon's customers the best shot at making their projects successful. Its team of experts, passionate about innovation and up-and-coming technologies, is continually learning so it can consistently bring the latest technologies to customers and offer services of the highest quality. The same applies to cdmon's customer care service, the best rated in Spain, available 24/7 to help customers resolve their doubts and find the perfect product for their project.
But even though having the fastest Cloud Hosting platform in Europe is very important, it is only a small part of what cdmon can offer. cdmon's focus is on its customers: making it possible for them to change their lives, carry out their projects, and expand. For this reason, it wants to give them the best service and exceptional performance so their projects can soar.
cdmon invites technology enthusiasts who are interested in transforming their lives and look forward to changing the world. For this reason, it offers the best quality and security across all its products. And its products offer so much more.
All its hosting plans include benefits you won't find anywhere else: from wildcard and multi-domain SSL certificates to daily backups that any customer can restore from the Control Panel. cdmon believes in its products so much that there is no fixed-term contract: once customers see everything cdmon's hosting provides, they won't want to leave.
But who is cdmon?
It is a Spanish company founded in 2002 that has become a leading hosting and domain provider. Its headquarters are in Malgrat de Mar, but the team is spread across the Iberian Peninsula and Europe. cdmon wants to create an open, quality Internet where everyone fits, and it aims to do that by focusing on its customers' projects and giving them the best services. Discover all you can do with the best and fastest Cloud hosting and join the more than 200,000 projects that have relied on cdmon over the last 20 years.
Media Contact
Company Name: cdmon
Email: Send Email
Country: Spain
Website: https://www.cdmon.com/en/
IoT harmony? What Matter and Thread really mean for your smart home – Ars Technica
Matter promises to make smart home devices work with any control system you want to use, securely. This marketing image also seems to promise an intriguing future involving smart mid-century modern chairs and smart statement globes.
The specification for Matter 1.0 was released on Tuesday, all 899 pages of it. More importantly, smart home manufacturers and software makers can now apply for this cross-compatibility standard, have their products certified for it, and release them. What does that mean for you, the person who actually buys and deals with this stuff?
At the moment, not much. If you have smart home devices set up, some of them might start working with Matter soon, either through firmware upgrades to devices or hubs. If you're deciding whether to buy something now, you might want to wait to see if it's slated to work with Matter. The first devices with a Matter logo on the box could appear in as little as a month. Amazon, Google, Apple, and Samsung's SmartThings division have all said they're ready to update their core products with Matter compatibility when they can.
That's how Matter will arrive, but what does Matter do? You have questions, and we've got... well, not definitive answers, but information and scenarios. This is a gigantic standards working group trying to keep things moving across both the world's largest multinational companies and esoteric manufacturers of tiny circuit boards. It's a whole thing. But we'll try to answer some self-directed questions to provide some clarity.
What is Matter? Where did it come from?
Matter is maintained by the Connectivity Standards Alliance (CSA), which was previously known as the ZigBee Alliance. ZigBee is an IEEE 802.15.4-based specification for a low-power, low-data-rate mesh network that is already in use by Philips Hue bulbs and hubs, Amazon's Echo and Eero devices, Samsung's SmartThings, Yale smart locks, and many smaller devices. It had pretty good buy-in from manufacturers, and it proved the value of mesh networking.
Starting with that foundation, the CSA somehow built up momentum to push for something people want more than an iterative networking standard: a guarantee that if they buy, or develop, a smart home device, they won't have to figure out which corporate allegiances that device can work with. The mission was to "simplify development for manufacturers and increase compatibility for consumers," the ZigBee Alliance said, and the new standard was called CHIP, or "Connected Home over IP."
That standard was renamed Matter, then delayed, more than once. Stacey Higginbotham, a reporter focused on IoT, cited the COVID-19 pandemic and the group's rapidly scaling size for its earliest delays. This week, with 550 members of the CSA involved in Matter standards development and a "fall 2022" release target arriving, Higginbotham heard from insiders that the Matter group felt pressured to release something, even if it was scaled back from its original promises. And as you might imagine, a lot of bugs and questions come up when more than 250 previously siloed companies start working together on something.
So Matter is just a new ZigBee with more corporate buy-in?
No, Matter is an interoperability standard, with many connection options available to devices. Under Matter, devices can talk to each other over standard Wi-Fi, Ethernet, Bluetooth Low Energy, or Thread, another IEEE 802.15.4-based standard (we'll get to Thread a bit later).
If you have an extensive network already set up with ZigBee or Z-Wave, it might still fit into a Matter network. Hub makers are gradually announcing firmware updates to allow for Matter compatibility, allowing them to serve as a bridge between their mesh and Matter-ready controllers and devices. Before it rebranded as the CSA, the ZigBee Alliance announced that it would work with the Thread Group to create compatible application layers.
Telecom Cloud Market to Hit $103.6 Billion by 2030: Grand View Research, Inc. – Benzinga
SAN FRANCISCO, Oct. 6, 2022 /PRNewswire/ -- The global telecom cloud market size is expected to reach USD 103.6 billion by 2030, according to a new report by Grand View Research, Inc. The market is anticipated to expand at a CAGR of 19.9% from 2022 to 2030. A telecom cloud is a next-generation network architecture that integrates cloud-native technologies, network function virtualization, and software-defined networking into a distributed computing network. Orchestration and automation are essential since the computing and network resources are scattered across clouds and locations.
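As a quick sanity check on those figures, the 2030 projection and the stated CAGR together imply a 2022 base market of roughly USD 24 billion. This is a back-of-the-envelope derivation, not a number from the report itself:

```python
# Implied 2022 market size from the report's 2030 projection and CAGR.
# The CAGR compounds over the 8 years from 2022 to 2030.
size_2030 = 103.6   # USD billion, per the report
cagr = 0.199        # 19.9% per year
years = 2030 - 2022

implied_2022 = size_2030 / (1 + cagr) ** years
print(f"Implied 2022 base: USD {implied_2022:.1f} billion")  # prints ~24.3
```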
Key Industry Insights & Findings from the report:
Read 120-page full market research report, "Telecom Cloud Market Size, Share & Trends Analysis Report By Component (Solution, Services), By Deployment Type, By Service Model, By Application, By Enterprise Size, By Region, And Segment Forecasts, 2022 - 2030", published by Grand View Research.
Telecom Cloud Market Growth & Trends
Telco cloud refers to the shift by communications service providers (CSPs) from vertically integrated, proprietary hardware-based network infrastructure to cloud-based technologies. In the telecom business, the term is often used to refer to multi-cloud computing. The main drivers in the telecom industry are increased customer satisfaction, corporate agility, and cost savings, among others. In addition, the use of standard computing hardware and automation reduces CapEx and OpEx, driving increased adoption of the telco cloud in the telecommunications industry.
It also enables innovative, bespoke B2B solutions: telcos can bring highly customized corporate products to market rapidly and affordably. The telco cloud makes it simple to collaborate with business service partners by providing access to public cloud services from any device, at any time. It also helps protect consumers and profits from competitors; for instance, it enables operators to swiftly alter business models to test new goods, services, and pricing schemes.
It also makes setting up new consumer experiences and communication channels easier. Furthermore, the lower CapEx and OpEx needs of the telco cloud, better service resilience, and the capacity to respond swiftly to faults and demand changes allow operators to maintain service levels and competitive pricing. These advantages result in lower client attrition.
The top trends in the telecom cloud industry are hybrid cloud hosting, Cloud-Native Network Functions (CNFs), and telecom cloud collaboration. A hybrid cloud merges private and public clouds so that software and data are interoperable and portable. It allows telcos to optimize operations with various patterns to manage workloads. It improves resource allocation, optimizes infrastructure spending, provides enhanced organizational agility, and offers the ability to scale using the public cloud alongside the controls available in a private cloud deployment.
Also, in the case of CNFs, software-defined networking gives way to NFV (Network Functions Virtualization), which provides more independence from proprietary servers and hardware. The result is a cloud-native architecture that combines VNFs and CNFs while adopting 5G features, giving telecom businesses looking to expand their services maximum market coverage. Moreover, telecom cloud collaboration includes partnerships between hyperscalers and telcos, a major cloud computing trend transforming the business.
Cloud service providers and telecom enterprises join forces to expand edge computing collaboration and 5G. Telecom cloud service providers are increasing their connectivity with the help of technology advancement to gain a competitive edge over their peers and capture a significant market share.
Telecom Cloud Market Segmentation
Grand View Research has segmented the global telecom cloud market based on component, deployment type, service model, application, enterprise size, and region:
Telecom Cloud Market - Component Outlook (Revenue, USD Billion, 2017 - 2030)
Telecom Cloud Market - Deployment Type Outlook (Revenue, USD Billion, 2017 - 2030)
Telecom Cloud Market - Service Model Outlook (Revenue, USD Billion, 2017 - 2030)
Telecom Cloud Market - Application Outlook (Revenue, USD Billion, 2017 - 2030)
Telecom Cloud Market - Enterprise Size Outlook (Revenue, USD Billion, 2017 - 2030)
Telecom Cloud Market - Regional Outlook (Revenue, USD Billion, 2017 - 2030)
List of Key Players of Telecom Cloud Market
Check out more related studies published by Grand View Research:
Browse through Grand View Research's Communications Infrastructure Industry Research Reports.
About Grand View Research
Grand View Research, a U.S.-based market research and consulting company, provides syndicated as well as customized research reports and consulting services. Registered in California and headquartered in San Francisco, the company comprises over 425 analysts and consultants, adding more than 1,200 market research reports to its vast database each year. These reports offer in-depth analysis of 46 industries across 25 major countries worldwide. With the help of an interactive market intelligence platform, Grand View Research helps Fortune 500 companies and renowned academic institutes understand the global and regional business environment and gauge the opportunities that lie ahead.
Contact:
Sherry James
Corporate Sales Specialist, USA
Grand View Research, Inc.
Phone: 1-415-349-0058
Toll Free: 1-888-202-9519
Email: sales@grandviewresearch.com
Web: https://www.grandviewresearch.com
Grand View Compass | Astra ESG Solutions
Follow Us: LinkedIn | Twitter
Logo: https://mma.prnewswire.com/media/661327/Grand_View_Research_Logo.jpg
SOURCE Grand View Research, Inc
The Biden administration issues sweeping new rules on chip-tech exports to China – Protocol
"I would say down the road, we will be known for more than just security. And we're starting to see that today," Kurtz said.
CrowdStrike brings plenty of credibility from its work in cybersecurity to its effort to penetrate the broader IT space, according to equity research analysts who spoke with Protocol. The company recently disclosed surpassing $2 billion in annual recurring revenue, just 18 months after reaching $1 billion. And even at CrowdStrike's scale, it has continued to generate revenue growth in the vicinity of 60% year over year in recent quarters.
In a highly fragmented market like cybersecurity, this type of traction for a vendor is unique, said Joshua Tilton, senior vice president for equity research at Wolfe Research. "They're sustaining [rapid] growth and profitability, which is very rare in this space."
At the root of CrowdStrike's surge in adoption is its cloud-native software platform, which allows security teams to easily introduce new capabilities without needing to install another piece of software on user devices or operate an additional product with a separate interface. Instead, CrowdStrike provides a single interface for all of its services and requires just one software agent to be installed on end-user devices.
As a result, CrowdStrike can tell existing customers who are considering a new capability: "You already have our agent. Turn it on, try it out," Kurtz said. "And if you like it, keep it on. It's that easy."
For years, Kurtz has touted the potential for CrowdStrike to serve as the "Salesforce of security" thanks to this cloud-based platform strategy. But at a time when cybersecurity teams are looking to consolidate on fewer vendors and are short on the staff needed to operate tools, CrowdStrike's approach is increasingly resonating with customers, analysts told Protocol.
The company has now expanded well beyond endpoint detection and response, a category it pioneered to improve detection of malicious activity and attacks (such as ransomware and other malware) on devices such as PCs. Along with endpoint protection, CrowdStrike now offers security across cloud workloads, identity credentials, and security and IT operations.
The cloud-native platform concept is still early on for cybersecurity, but if CrowdStrike's momentum continues, it's poised to potentially become the first "fully integrated, software-based platform" in the security industry, Tilton said. That's in contrast to other platform security vendors that are hampered by architectures that predated the cloud, or that rely on hardware for some of their functionality.
"CrowdStrike's DNA is that they've come as a cloud-native company with a focus on security from day one," said Shaul Eyal, managing director at Cowen. "It does provide them with an edge."
Even with CrowdStrike's advantages, there are no guarantees it will maintain a leading position in a market as large and competitive as endpoint security. There, the company faces a fierce challenge from Microsoft and its Defender product. It's a topic that Kurtz is as outspoken as ever about.
In regards to Microsoft, "if you are coming out with zero-day vulnerabilities on a weekly basis, which are being exploited, that doesn't build trust with customers," Kurtz said.
"I'm not saying they're not going to win deals. Because they're Microsoft, sure, they're going to win some deals," he said. "But we do see deals boomerang back our way when someone has an issue. Many of the breaches that we actually respond to [are for customers with] Microsoft endpoint technologies in use."
Even so, Microsoft brings plenty of advantages of its own in terms of its security approach, analysts told Protocol. Much of the business world counts itself as part of the Microsoft customer base already, and the company has seen major success in bundling its Defender security product into its higher-tier Office 365 productivity suite, known as E5. As of Microsoft's quarter that ended June 30, seats in Office 365 E5 climbed 60% year-over-year, the company reported.
And for every CISO who thinks it doesn't make sense to trust Microsoft on security due to vulnerabilities in its software products, there is another CISO who thinks Microsoft's ubiquity in IT is exactly why the tech giant is worth leveraging for security, Tilton said.
Beyond the successful bundling strategy, Microsoft has overall done "an exceptional job of elevating security within their product portfolio," said Gregg Moskowitz, managing director and senior enterprise software analyst at Mizuho Securities USA.
Still, "we do typically hear that Microsoft has limitations when it comes to what an enterprise's requirements are across some of these cybersecurity areas," including on endpoint, Moskowitz said. At the same time, "we do believe Microsoft's going to get a lot stronger over time," he said.
IDC figures have shown CrowdStrike in the lead on endpoint security market share, with 12.6% of the market in 2021, compared to 11.2% for Microsoft. CrowdStrike's growth of 68% in the market last year, however, was surpassed by Microsoft's growth of nearly 82%, according to the IDC figures.
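Those growth rates invite a rough extrapolation: if both vendors somehow sustained 2021's growth indefinitely, Microsoft's endpoint revenue would catch CrowdStrike's in under two years. This is a naive back-of-the-envelope model (constant growth rates, market shares used as revenue proxies), not a projection from the IDC data:

```python
import math

# 2021 endpoint security market shares (IDC, via the article), used as revenue proxies.
cs_share, ms_share = 12.6, 11.2      # percent of market
cs_growth, ms_growth = 1.68, 1.82    # 68% and ~82% annual growth multipliers

# Solve cs_share * cs_growth**t == ms_share * ms_growth**t for t.
t = math.log(cs_share / ms_share) / math.log(ms_growth / cs_growth)
print(f"Parity after ~{t:.1f} years of constant growth")  # prints ~1.5
```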
Still, Kurtz argued that CrowdStrike has a leg up in endpoint for plenty of reasons beyond not carrying the security baggage of Microsoft's vulnerability issues.
The chief advantage goes back to CrowdStrike's single-agent architecture, which he said requires fewer staff to operate and has a lower impact on user devices. That translates to better performance and less use of memory because the product does not rely on analyzing digital patterns, known as signatures, for signs of an attack.
I would say down the road, we will be known for more than just security. And we're starting to see that today.
All of these factors need to be considered when doing the math around how much it will cost to implement an endpoint security product into an operation, Kurtz said. Based on that math, "we are significantly cheaper to operationalize than Microsoft," he said.
CrowdStrike has particularly stood out with customers when it comes to the lower performance impact from its Falcon product line, said John Aplin, an executive security adviser at IT services provider World Wide Technology.
The company recently worked with one of the largest U.S. banks to select a new endpoint security product, and the choice came down to CrowdStrike or Microsoft Defender, he said. While the bank was initially tempted to utilize its E5 licensing and go with Defender, Aplin said, extensive testing revealed Falcon's comparatively lighter-weight impact on devices, prompting the customer to pick CrowdStrike.
Performance impact is not a trivial thing when customers are often running 40 to 70 different security tools, he said. So while being able to provide reliable security is obviously important, the "operational effectiveness" in areas such as performance impact on devices is "where CrowdStrike always wins," he said.
The reputation for trustworthy security that CrowdStrike has built since its founding in 2011 shouldn't be minimized as a factor either, according to Wolfe Research's Tilton.
By and large, CISOs make purchasing decisions "based on the amount of minutes of sleep at night" they expect to get from a product, he said. CrowdStrike's "first-mover" advantage in endpoint detection and response is a huge one, and its brand awareness is virtually unmatched in security, probably on par only with that of Palo Alto Networks, Tilton said.
While some smaller challengers, chiefly SentinelOne, have made headway in the endpoint security space, they have an uphill battle, he said. In endpoint security, "the CISO has to have a good reason to not buy CrowdStrike."
In categories outside of endpoint security, CrowdStrike doesn't yet enjoy the same stature. But in some areas, such as identity security, it's on track to get there quickly.
Misuse of credentials has emerged as the biggest source of breaches by far as workers have moved outside of the protections of the office firewall, according to Verizon. While CrowdStrike isn't trying to compete with identity management vendors such as Okta or Ping Identity, the company does believe it's found a sweet spot in helping customers to counter identity-based threats, Kurtz said.
Following its fall 2020 acquisition of identity security vendor Preempt Security, CrowdStrike has added identity protection and detection capabilities to its platform, and customer adoption has been "like a rocket ship," Kurtz said. During CrowdStrike's fiscal second quarter, ended July 31, customer subscriptions to the company's identity protection module doubled from the previous quarter.
That's a "stunning level of adoption from customers," Mizuho's Moskowitz said. Given that CrowdStrike paid $96 million for Preempt, "that's clearly one of the best small to midsize acquisitions that we've seen in software in recent years," he said.
CrowdStrike refers to its various add-on security capabilities as modules and currently has 22 in total, up from 11 in late 2019. A forthcoming module based on the company's planned acquisition of startup Reposify will be aimed at spotting exposed internet assets for customers, bringing CrowdStrike into the very buzzy market for external attack surface management.
Besides identity protection, the company's other fastest-growing module at the moment is data observability, based on its early 2021 acquisition of Humio, which was recently rebranded as Falcon LogScale. And while highly applicable to security, observability focuses on tracking and assessing many types of IT data. It enables customers to "do things that are not just security-related," Kurtz said, such as deploying software patches and taking other actions to improve IT hygiene.
George Kurtz, CEO of CrowdStrike. Photo: Michael Short/Bloomberg via Getty Images
In total, CrowdStrike reported that it was generating $2.14 billion in annual recurring revenue as of its latest quarter, with its "emerging products" category contributing $219 million. ARR for those emerging products, which include identity protection and observability but not more-established areas for CrowdStrike such as workload protection, surged 129% from the same period a year before.
Looking ahead, "we'll continue to solve problems that are outside of core endpoint protection and workload protection, but are related, in the IT world," Kurtz said.
Even within cybersecurity itself, CrowdStrike's emphasis on observability "shows that the industry is starting to recognize that cybersecurity is a data problem," said Deepak Jeevankumar, a managing director at Dell Technologies Capital, who had led an investment by the firm into Humio.
CrowdStrike has no ambitions to get into areas such as network or email security, Kurtz noted. But if a certain business challenge involves collecting and evaluating data from endpoints or workloads, whether that's IT or security data, "we can do that," he said.
Application security is another future area of interest, Kurtz said. Given the criticality of many business applications, "understanding their security, who's using them, how they're being used: that's important for organizations of many sizes to have that level of visibility and protection."
Within security, CrowdStrike is also notably embracing an approach that's come to be known as extended detection and response, or XDR, for correlating data feeds from a variety of different security tools. CrowdStrike's XDR approach taps into data both from its own products and from third-party tools, including vendors in its CrowdXDR Alliance that have technical integrations with CrowdStrike.
While XDR is no doubt an industry buzzword, it's the most effective way yet to put the pieces together and understand how a cyberattack occurred, Kurtz said. "Before XDR, we were sort of blind to how [an attacker] got to the endpoint," he said. "Now were able to tell the whole story."
CrowdStrike offers a number of managed security services as well, which the vendor was quick to recognize as an important option amid the cybersecurity talent shortage, according to Peter Firstbrook, vice president and analyst at Gartner.
"CrowdStrike actually perfected this," Firstbrook said. "They ran into this roadblock early. Customers said, 'Look, this [technology] is really cool. But we don't have anybody that can manage it.'"
Ultimately, CrowdStrike is well positioned at a time when CISOs are fed up with going to dozens of different vendors to meet their security needs, Cowen's Eyal said. The current refrain from CISOs is, "'We want to deal with the Costco or the Walmart, the big supermarket, for all of our security needs,'" he said. In that respect, "the platform approach is absolutely going to be benefiting [vendors] like CrowdStrike."
Over the years, Kurtz said he hasn't backed away from comparing CrowdStrike with Salesforce for a good reason: It's a meaningful comparison, which has only gotten more so as time has gone on.
"I've said this since I started the company, that we wanted to be that 'Salesforce of security' to have a true cloud platform that would allow customers to do more things with a single-agent architecture," he said. "We haven't really deviated from that."
BOB Recruitment 2022 for IT Professionals: Check Vacancies, Apply Online Till Oct 24 – StudyCafe
BOB Recruitment 2022: Bank of Baroda (BOB) is looking for qualified candidates for Cloud Engineer, Application Architect, Enterprise Architect, Infrastructure Architect, Integration Expert, and Technology Architect positions. The total number of vacancies for these posts is 12. Interested candidates should review the job description and apply using the link provided in the official notification. Candidates who have a B.E./B.Tech. in Computer Science or Information Technology will be given preference. The last date for application submission is 24.10.2022 (23:59 hours).
Candidates are requested to apply for the post before the deadline. No application shall be entertained after the stipulated time/date, and incomplete applications or applications received after the specified time/date shall be REJECTED. All the details regarding this post, such as the BOB Recruitment 2022 official notification, age limit, and eligibility criteria, are given in this article.
1. Cloud Engineer: B.E./B.Tech. in Computer Science or Information Technology preferred. Minimum 10 years of technical and IT experience, of which at least 5 years in the field of cloud computing.
2. Application Architect: B.E./B.Tech. in Computer Science or Information Technology. Minimum 10 years of technical and IT experience, of which at least 5 years as an Application Architect. Experience as an Application Architect in alternate delivery channels (e.g., CBS, LOS, LMS) will be preferred, as will experience with Agile methodology, Core Java, and Linux/Unix servers.
3. Enterprise Architect: B.E./B.Tech. in Computer Science or Information Technology. Minimum 10 years of technical and IT experience, of which at least 5 years in architecting, designing, and managing banking platforms.
4. Infrastructure Architect: B.E./B.Tech. in Computer Science or Information Technology. Minimum 10 years of technical and IT experience, of which at least 5 years in architecting, designing, and managing banking platforms.
5. Integration Expert: B.E./B.Tech. in Computer Science or Information Technology. Candidates with professional certifications in OS (Unix/Linux), middleware, storage, and load balancers will be preferred. Minimum 10 years of technical and IT experience, of which at least 5 years in designing and building large IT infrastructure projects.
6. Technology Architect: B.E./B.Tech. in Computer Science or Information Technology. Minimum 10 years of technical and IT experience, of which at least 5 years in the integration process of banking platforms.
The minimum age limit to apply for this recruitment is 32 years, and the maximum age limit is 45 years.
Remuneration will be offered based on the candidate's qualifications, experience, overall suitability, last drawn salary, and market benchmarks for the respective posts, and shall not be a limiting factor for suitable candidates.
1. Cloud Engineer:
a. Design, implement and manage secure, scalable, and reliable cloud infrastructure environments.
b. Propose and implement cloud infrastructure transformation to modern technologies and methods used to run microservices application architectures.
c. Build, troubleshoot, and optimize container-based cloud infrastructure.
d. Ensure operational readiness for launching secure and scalable workloads into public and hybrid cloud environments.
e. Validate existing infrastructure security, performance and availability and make recommendations for improvements and optimization.
f. Ensure Backups, resilience, and business continuity.
g. Implement infrastructure best practices.
2. Application Architect:
a. Design and validate application architecture design and other technology architecture.
b. Estimate design efforts, define detailed schedules, evaluate technologies, develop prototypes, and architect design.
c. Change application Architecture as per business needs and Technology changes.
d. Understand and apply architect principles, processes, standards and guidelines.
e. Understand, document, and monitor application layering dependencies (User-Interface, Deployment, Public Interface, Application Domain, Application Infrastructure, Technical Frameworks, and Platforms) and application component dependencies.
f. Document and maintain context diagrams, functional architectures, data architecture, and messaging architecture diagrams and descriptions.
g. Understand and monitor impacts to and dependencies between existing technical and network environments.
3. Enterprise Architect:
a. Set up technical standards and governance structure for the enterprise.
b. Assist business strategy and accordingly drive technology strategy from an architecture perspective.
c. Provide technology architecture expertise and guidance across multiple business divisions and technology domains.
d. Setting up technical standards and formulating an Enterprise Architecture (EA) Governance Framework.
f. Driving technology strategy from an architecture perspective, across a portfolio of applications in the Bank, for resource optimization and Risk mitigation.
g. Translating business requirements into specific system, application, or process designs, including working with business personnel and executives to identify functional requirements.
h. Define/ maintain Target Architectures in Roadmaps.
i. Lead and/or assist efforts to scope and architect major change programs, leading strategic options analysis & proposing end-to-end solutions & highlighting trade-offs.
j. Review ongoing designs of major programs to identify strategic opportunities and resolve design issues during delivery.
4. Infrastructure Architect:
a. Assist in the development of the overall technology strategy with a critical focus on enterprise and platform architecture.
b. Responsible for the design of systems and interfaces both internal and external.
c. Identifying and integrating overall integration points in the context of a project as well as other applications in the environment.
d. Defining guidelines and benchmarks for non-functional requirement considerations during project implementation.
e. Review architecture and design on various aspects like extensibility, scalability, security, design patterns, user experience, non-functional requirements, etc., against a predefined checklist and ensure that all relevant best practices are followed.
f. Providing a solution to any issue that is raised during code/design review and justifying the decision taken.
5. Integration Expert:
a. Designing, articulating and implementing architectural scalability.
b. Work in close collaboration with the application architect to ensure optimal infrastructure design.
c. Draw a long-term enterprise-level IT Infrastructure Plan.
d. Ensure that availability requirements are met in the design.
e. Validate all Infrastructure Changes and obtain necessary approvals from the competent authority.
f. Interact with IT Partners, Consultants and internal stakeholders.
g. Evaluate infra technology and industry trends, and identify the prospective impact on business.
h. Participate in developing and managing an ongoing enterprise architecture governance structure on the basis of business and IT strategies.
i. Promote organization architecture process and results to business and IT Departments.
j. Lead and direct the preparation of governing principles to guide decision-making relevant to infrastructure architecture.
6. Technology Architect:
a. Collaborate on the successful integration of hardware, software and Internet resources.
b. Strong experience in Middleware and Infrastructure management.
c. Assist in planning and implementing a variety of technological opportunities.
d. Assist in the creation, maintenance, and integration of technology plans.
e. Ability to lead teams to successful end results.
f. Strategic planning and continuous improvement mindset, relevant to technology processes and systems. Assess technology skill levels of co-workers and customers.
1. Cloud Engineer:
a. Strong consulting experience with large-scale migrations to Cloud Providers such as Azure, AWS, Google, and IBM.
b. Knowledge of infrastructure solutions, platform migration, system security, and enterprise directories.
2. Application Architect:
a. Experience as Application Architect in Alternate Delivery Channels (e.g., CBS, LOS, LMS).
b. Experience in AGILE Methodology/Core JAVA/LINUX/UNIX Server preferred.
c. Deep understanding of cloud computing, in one or more of the following domains: Core Platform: Compute (IaaS & PaaS), Storage, Networking.
3. Enterprise Architect:
a. Strong knowledge of enterprise architecture and design, including architecture frameworks such as TOGAF (TOGAF certification preferred).
b. Strong knowledge of technologies such as APIs, SOA, programming languages, cloud hosting practices and big data technologies.
4. Infrastructure Architect:
a. Overall understanding of banking technology systems and processes. A track record of having successfully built innovations.
b. Implemented core banking, delivery channels, payment systems, and other digital banking solutions.
5. Integration Expert:
a. Experience in designing and building large IT infrastructure projects encompassing hardware, virtualization, and middleware layers.
b. Candidates with Professional certifications on OS (Unix/Linux), Middleware, Storage, and Load Balancer are preferred.
6. Technology Architect:
a. Understanding of IT architecture like SOA and integration methodologies like ESB and APIs.
b. Strong knowledge of development environments, middleware components, databases and open-source technologies.
c. Understanding of solutions for platform and application layers.
Interested candidates are advised to visit the Bank's website http://www.bankofbaroda.co.in (Career Page, Current Opportunities section) for further details, or follow the given link to apply for the said post. The last date for submission of applications is 24.10.2022 (23:59 hours).
Step 1: Go to the BOB official website.
Step 2: Search for the BOB Recruitment 2022 Notification here.
Step 3: Read all of the information in the notification.
Step 4: Apply and submit the application form in accordance with the mode of application specified in the official notification.
To Read Official Notification Click Here
Disclaimer: The recruitment information provided above is for informational purposes only and has been taken from the official site of the organisation. We do not provide any recruitment guarantee. Recruitment is to be done as per the official recruitment process of the company or organisation that posted the vacancy. We don't charge any fee for providing this job information. Neither the author nor Studycafe and its affiliates accept any liabilities for any loss or damage of any kind arising out of any information in this article, nor for any actions taken in reliance thereon.
Here is the original post:
BOB Recruitment 2022 for IT Professionals: Check Vacancies, Apply Online Till Oct 24 - StudyCafe
DoorDash Hacker Incident Illustrates Third-Party Vendor Risks and Potential Vulnerabilities – JD Supra
Hackers have increasingly focused on third-party vendors as avenues to data held by associated businesses. On August 25, 2022, DoorDash announced that it had experienced a data breach which impacted the personal information of certain customers and drivers. After detecting unusual activity originating from one of its third-party vendors, an investigation by DoorDash revealed that the vendor was the target of a phishing campaign. This comes just a few years after DoorDash customer data was breached in a similar hack in 2019, which was also linked to a third-party vendor. Unfortunately, DoorDash is not alone in experiencing the security risks linked to many third-party vendors.
Several companies have been exposed to data breaches by their third-party vendors in recent years. These hacks have resulted in lawsuits from consumers as well as government investigations. Failing to secure consumer data and monitor the cybersecurity practices of third-party vendors may open businesses up to state and federal enforcement actions.
Third-party vendors have significant access to the systems and data used by the companies that they work with. Many enterprises also contract with more than one third-party vendor, increasing the number of ways that information could be leaked. Hackers have learned to exploit this access by targeting the third-party vendors, who may have less stringent cybersecurity measures than associated businesses. Third-party vendors may be more vulnerable to phishing attacks, like the one used to breach DoorDash, in which hackers use compromised emails to gain access to sensitive data. They have also been the targets of increased ransomware efforts and attacks against outdated hosting services that leave information open for unauthorized use.
Many companies may not discuss data security policies with their third-party vendors, which means they could inadvertently be trusting their customers information with others who are not prepared to prevent breaches. While companies are focused on the security of their own networks, they should be aware that the vulnerabilities of their third-party vendors may pose an even greater risk to their customer data. Failing to assess and guard against these risks leaves businesses vulnerable to lawsuits from their consumers as well as government enforcement actions.
To minimize some of these risks, companies should prioritize cyber and data security when working with third-party vendors. Companies should ensure that any third-party vendor they contract with has a cybersecurity plan that includes regular testing of their protocols, documented efforts to fix any vulnerabilities, and communication of best practices to employees. Before agreeing to work with a vendor, businesses should ask how the vendor identifies data incidents and what their plan is to address any incident that may arise. Companies should also be sure to monitor what internal data each vendor has access to and consider whether the third-party vendor's security policies are sufficient compared to their own policies. Access controls should be implemented to monitor third-party data usage and alert to any unauthorized access that might originate with a third-party vendor.
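The access-control monitoring recommended above can be sketched in a few lines. This is an illustrative example only, with an invented data model (the vendor names, dataset names, and allowlist are hypothetical), not a production control:

```python
# Map of vendor -> data sets that vendor is approved to access.
# (Hypothetical allowlist, for illustration.)
VENDOR_ALLOWLIST = {
    "payments-vendor": {"transactions"},
    "delivery-vendor": {"orders", "driver_profiles"},
}

def audit_vendor_access(events):
    """Return alerts for any event where a vendor touched data it is
    not approved for, or where the vendor is unknown entirely."""
    alerts = []
    for vendor, dataset in events:
        allowed = VENDOR_ALLOWLIST.get(vendor)
        if allowed is None or dataset not in allowed:
            alerts.append((vendor, dataset))
    return alerts

events = [
    ("payments-vendor", "transactions"),     # approved
    ("delivery-vendor", "customer_emails"),  # not approved -> alert
    ("unknown-vendor", "orders"),            # unknown vendor -> alert
]
print(audit_vendor_access(events))
```

In practice, the events would come from access logs, and alerts would feed a SIEM or ticketing system rather than a print statement.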
Contract language should also be drafted with data security in mind. To ensure fast and effective responses to cyber threats, third-party vendors should be obligated to report data breach incidents that they discover within a designated timeframe. Specific security requirements may also be established within a vendor contract. In the event that a data breach does occur, companies should consider adding an indemnity clause that would hold third-party vendors liable for any breach caused within their organization.
Bottom Line
Businesses should be aware of the cybersecurity risks associated with third-party vendors. When working with third-party vendors, companies should consider and assess the vendors security protocols. Both businesses and third-party vendors alike should invest in cyber insurance, and businesses should include strong indemnification language in their contracts with third-party vendors.
See the article here:
DoorDash Hacker Incident Illustrates Third-Party Vendor Risks and Potential Vulnerabilities - JD Supra
CRITICALSTART Announces Enhanced Threat Detection and Response Capabilities to Support Microsoft Defender for Servers – PR Newswire
New service offering is the company's first threat detection and response solution to support the Microsoft Defender for Cloud product portfolio
PLANO, Texas, Oct. 5, 2022 /PRNewswire/ -- Today, Critical Start, a leading provider of Managed Detection and Response (MDR) cybersecurity solutions, announced the upcoming availability of its MDR service offering for Microsoft Defender for Servers, part of the Microsoft Defender for Cloud product portfolio. The new service will bring Microsoft customers unique capabilities to investigate and respond to attacks on workloads running in the cloud and help stop business disruption.
As business growth demands increase, enterprises are continuing to recognize the many advantages gained by adopting cloud computing services. The benefits include greater agility, lower infrastructure costs, faster deployment and superior availability. At the same time, cloud-based solutions have become an easy target for attacks because of their increased exposure to the Internet. In 2021, over 88% of organizations experienced cyberattacks on their cloud-native applications and infrastructure.1
Cloud Workload Protection (CWP) solutions, like Microsoft Defender for Cloud, bring security teams visibility and integrated threat protection across cloud workloads with automated security to detect and stop suspicious activity. These same security teams have the overarching challenge of being able to properly deploy, manage and optimize the solution as business needs change, in addition to being able to investigate and respond to evolving attacks before they disrupt business.
The Critical Start MDR service, working alongside Microsoft Defender for Servers, will empower security administrators by helping them monitor, investigate and respond to security alerts and incidents at cloud speed. By combining Critical Start's industry-leading Zero Trust Analytics Platform (ZTAP), which can auto-resolve false positives at scale, with its human-led monitoring, investigation and response, security teams can maximize performance to identify and contain a breach much more quickly. The Critical Start Security Operations Center can respond on behalf of Microsoft's customers to stop attacks on elastic and ephemeral cloud workloads.
"Utilizing cloud services can provide organizations with tremendous business value, but it is often coupled with a barrage of distinctive security challenges. Microsoft Security Solutions continue to lead the industry at addressing those challenges," said Randy Watkins, CTO at Critical Start. "As a Microsoft Security Design partner, we are excited to further extend our collaboration to address the unique and dynamic needs of our mutual customers and reduce the risk of security incidents in the cloud."
This new offering is part of a robust portfolio of services and solutions Critical Start offers for Microsoft Security. The company also has MDR offerings for Microsoft Sentinel, Microsoft Defender for Endpoint and Microsoft 365 Defender. Critical Start's MDR service for Microsoft Defender for Servers is anticipated to reach general availability in early 2023.
For more information on Critical Start and its solutions, please visit http://www.criticalstart.com/.
About Critical Start
Today's enterprise faces radical, ever-growing, and ever-more-sophisticated multi-vector cyber-attacks. Facing this situation is hard, but it doesn't have to be. Critical Start simplifies breach prevention by delivering the most effective managed detection and incident response services powered by the Zero Trust Analytics Platform (ZTAP) with the industry's only Trusted Behavior Registry (TBR) and MOBILESOC. With 24x7x365 expert security analysts and a Cyber Research Unit (CRU), we monitor, investigate, and remediate alerts swiftly and effectively, via contractual Service Level Agreements (SLAs) for Time to Detection (TTD) and Median Time to Resolution (MTTR), and 100% transparency into our service. For more information, visit criticalstart.com. Follow Critical Start on LinkedIn, @CRITICALSTART, or on Twitter, @CRITICALSTART.
1 - Enterprise Strategy Group - Unifying Security Controls to Manage Security Risk Across Cloud Environment: Helping Customers Efficiently Protect Their Critical Workloads in the Cloud, May 2021
SOURCE CRITICALSTART
See the original post:
CRITICALSTART Announces Enhanced Threat Detection and Response Capabilities to Support Microsoft Defender for Servers - PR Newswire
Why Cloud Data Modernization Is Needed, and How to Make It Work – Acceleration Economy
When it comes to data, one fact has endured from the origin of mankind: it is inextricably linked to the decision-making process. The more data that we can include in our analysis, the more we can understand the past and navigate the future effectively.
Practices in the capture and storage of business data, often from diverse global sources, must evolve in response to the skyrocketing quantity of data that businesses produce and their need to act on it faster than ever. One research firm, Statista, forecasts that there will be 181 zettabytes of data by 2025, up from 97 zettabytes this year. A zettabyte is one billion terabytes. The chart below depicts this growth trajectory.
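As a quick sanity check of the Statista figures quoted above, the implied compound annual growth rate works out to roughly 23% per year. The three-year window is an assumption based on the years cited in the article:

```python
# Figures from the article: 97 ZB this year (2022), 181 ZB forecast for 2025.
zb_2022, zb_2025, years = 97, 181, 3

# Compound annual growth rate implied by the forecast.
cagr = (zb_2025 / zb_2022) ** (1 / years) - 1
print(f"Implied annual growth: {cagr:.1%}")

# 1 zettabyte = one billion terabytes, as the article notes.
tb_per_zb = 1_000_000_000
print(f"181 ZB = {zb_2025 * tb_per_zb:,} TB")
```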
Companies can only store, manage, and act on data at the required speed by modernizing their data infrastructure. To do so, they need to move past the legacy construct of monolithic systems, which store a single type of data in siloed fashion with no movement of data between them. By modernizing such systems in the cloud, companies enable unification of data with robust new functionality and services that don't exist in legacy systems.
To understand the value of modernizing data in the cloud, it's helpful to start with this baseline, data-oriented definition of the cloud: a vast network of remote servers hooked together and meant to operate as a single ecosystem. These servers store and manage data, run applications, or deliver content or a service such as streaming videos, web mail, office productivity software, or social media. Users can access files and data from any Internet-capable device, making information available anywhere, anytime.
Because of complexity, silos, and the need to have vast amounts and sources of data accessible at high velocity, the need to modernize data infrastructure takes on more urgency every day. Moving data to the cloud is the most compelling option because the cloud will deliver (at least) three critical benefits:
The cloud allows any organization to ingest, analyze and contextualize data at high speed. And we all know that fast decision-making and real-time actions are key to capitalizing on business opportunities in the Acceleration Economy.
In addition, the cloud requires low to no maintenance on the part of the customer, improving security and protection of data and systems, as well as data recovery in case of any threat or incident. This is especially important for highly regulated industries that require large volumes of historical data and regulated compliance by implementing business rules that apply to many systems and tools at once.
There is no magic recipe for an organization to transition from traditional or monolithic data systems to a cloud data system. That entails moving from a physical infrastructure that has been designed as a reflection of a traditional, hierarchical organization towards something that is more flat, horizontal, and collaborative, with fewer boundaries and barriers.
However, there are some cloud data modernization recommendations that should hold true in virtually all industries and use cases:
While the points above are ordered based on a logical sequence, the first point, relating to people, must be addressed at the outset. First, moving to the cloud challenges the status quo (data ownership, silos, org structure) of many organizations. With cloud technology, we are moving from a practice of "data to report to decision" to a more streamlined practice of "data to decision"; the implications of this new paradigm can be highly impactful.
Join us on October 27, 2022 for Acceleration Economy's Data Modernization Digital Battleground, a digital event in which four leading cloud vendors answer questions on key considerations for updating data strategies and technology. Register for free here.
So, when embarking upon modernization of the data stack, a company should start by educating (or re-educating) the entire workforce, starting from the top of the hierarchy, about being open and transparent, practicing collaboration among teams (which team generates and analyzes specific types of data), delegating more decisions to others, and learning about new technologies and tools. Once the cultural element has been addressed, let engineers and technical people handle the technical aspects of cloud data modernization.
Once migration and modernization have happened, the tech team must stay in close contact with the cloud infrastructure vendor(s) and have a clear understanding about the responsibilities of each party. It is very important to actively monitor cloud performance, storage, and application usage, as well as vendor billing, a set of practices known as FinOps. Close internal monitoring of billing, combined with good communication with the provider(s), facilitates solid operational results and keeps the cloud provider(s) fully engaged on your behalf.
There are numerous vendors offering cloud solutions, but again, each and every organization is unique, with a different vision, strategy, and goals. It is easy to understand, therefore, why each vendor is more suitable for certain use cases, industries, and businesses, so a deep understanding of each vendor's product offering is critical before adopting one solution over others.
An evaluation of vendor strengths and alignment with your business goals and culture must include:
In the analysis above, I've focused on the why and how of data modernization in the cloud and shared important technical considerations.
There's one more critical technology factor to consider, and that's the vendor or partner you select to execute on your data modernization goals. In the table below, I'm presenting the companies, from my direct, hands-on experience and ongoing engagement, that are the best candidates to help you, and some key strengths they offer. These companies, of course, are the subject of ongoing analysis at the Acceleration Economy site.
Read this article:
Why Cloud Data Modernization Is Needed, and How to Make It Work - Acceleration Economy
Is a 10-Year-Old Facebook Technology the Future of Cloud Security? – Security Boulevard
In the pantheon of semi-obscure open source tools, osquery is one that deserves a closer look from most security professionals. It's easy to see why this old Facebook tool, originally used to query operating system data, has flown under the radar. Initially, it was used to improve the usability of Facebook across different platforms, but a few individuals, mainly on the west coast of the U.S., saw a hidden superpower in osquery that could upend the way security is managed. Because osquery lets you query nearly all of an operating system's data like a database, with rich, standardized telemetry, it effectively creates an insanely powerful EDR tool: one that gives you broad visibility into exactly what is going on with an OS and lets you ask questions about your security posture. It essentially lets a team with the right know-how perform outsized threat hunting, achieve faster detection and remediation, implement YARA rules, and more.
These superpowers created a small but very dedicated user base who were either active users or intrigued by what osquery could do.
But for all of osquery's might, there was a catch that prevented wider adoption. The open source version of osquery required knowledge of SQL and wasn't necessarily that easy to implement as part of a security stack. Also, in an increasingly cloud-native world, the open source version was at first limited to endpoints and was difficult to scale to cloud use. There's now a version from Uptycs that doesn't require knowledge of SQL, and it is a very powerful tool for securing laptops and other endpoints, Linux servers and more. However, we now live in a cloud-first and cloud-native world. So is osquery still relevant?
Something that will become almost immediately apparent to any adept user of osquery is that it is almost infinitely scalable and flexible. That flexibility means that osquery is free to break out of its traditional domain of laptop endpoints, on-premises Linux servers and data centers and to secure the cloud. At the end of the day, osquery is just a way to query data points in an operating system. With some tinkering, it can be used in cloud environments like AWS, Azure or GCP, in container environments like Kubernetes or even, in theory, with identity providers or SaaS tools.
This flexibility effectively means that this open source tool can be used by organizations to monitor everything from developer laptops to the identity authenticator devs use to sign in to services. It can get structured telemetry from SaaS apps and container instances where code is built and tested and from cloud services where the code is ultimately deployed and run. This can all be done from a single platform using a single tool.
Take a moment to think about how radical of a departure this is for the security community. We're used to buying single-use tools for each environment, each operating in its own silo with its own data model and its own set of rules. We then try to assimilate them into a stack and use an aggregator like a SIEM to try and pull all of the information together into a single source of truth. If a vendor of one of those products branches into another space, say an EDR vendor that moves into cloud workloads, it's usually done with a bolt-on acquisition of another company or technology that is often poorly integrated and implemented, and the data is often difficult to access or piece together into a unified picture. Not surprisingly, this way of doing things has led to gaps in visibility, alert fatigue, and frustration. This presents obvious challenges when today's high-growth companies are relying on a complex innovation supply chain to produce the code that powers their technology.
The transition to the cloud is only accelerating, but with the industry's attention focused on addressing the cloud threats that have been dominating the news, traditional endpoints are getting left behind. No matter how streamlined your cloud security platform is, if it's not including endpoints like developer laptops or on-premises Linux servers, you are giving up crucial visibility into your innovation life cycle. With reduced visibility comes risk.
For many security leaders, osquery flies under the radar (or, in some cases, is not even on the map) as a solution to these problems. But it shouldn't be. The ability of osquery to ingest and structure data so that it's almost infinitely queryable is a superpower that can enable security teams to secure their entire ecosystem and future-proof their security stack. No matter what environments or operating systems your organization uses, osquery can help your security teams quickly and efficiently find the answers to almost any security, posture, or configuration question. If you're worried about the posture of endpoints, osquery can answer those questions. But it can also answer questions about lateral movement in container pods or misconfigurations in AWS too.
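The query model described above can be illustrated without osquery itself. osquery exposes OS state as virtual SQL tables (for example, a `processes` table); the sketch below mimics that model with an in-memory sqlite3 table and invented rows, purely for illustration, to show the kind of threat-hunting question a team can ask in one SQL statement:

```python
import sqlite3

# Mimic a tiny version of osquery's `processes` table with mock rows.
# The processes listed here are invented for illustration.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE processes (pid INTEGER, name TEXT, path TEXT)")
conn.executemany(
    "INSERT INTO processes VALUES (?, ?, ?)",
    [
        (101, "sshd", "/usr/sbin/sshd"),
        (202, "nginx", "/usr/sbin/nginx"),
        (303, "cryptominer", "/tmp/cryptominer"),  # suspicious: runs from /tmp
    ],
)

# A threat-hunting style question: which processes run out of /tmp?
rows = conn.execute(
    "SELECT pid, name FROM processes WHERE path LIKE '/tmp/%'"
).fetchall()
print(rows)
```

With real osquery, the same `SELECT` would run against live OS state on every enrolled endpoint, which is what makes the "query your fleet like a database" model so powerful.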
Osquery is an open source tool that has the power to transform how we secure the cloud and makes a strong case for itself as one of the most powerful and flexible security solutions ever created.
Continue reading here:
Is a 10-Year-Old Facebook Technology the Future of Cloud Security? - Security Boulevard
The benefits of cloud-native software in financial services – The Paypers
Michael Backes, Co-Founder and CTO at receeve, explores the benefits of cloud-native software, especially when it comes to powering collections operations.
Security has always been a delicate topic when it comes to financial technology, especially within the banking industry and when partnering with third-party software providers. What are the real risks behind cloud-based software? How does cloud-native differ from on-premise alternatives? receeve's experts share more.
A collaborative approach to service digitisation cuts both time and costs for companies by allowing them to focus their critical resources on their core business. They also gain the added benefit of bringing in domain-specific expertise on a specific part of the business, increasing reliability and positive outcomes.
Increasing service digitisation empowers businesses, as its more comprehensive data analysis and reporting improve access to customer insights. This facilitates detailed customer segmentation and offers opportunities to boost service levels through increased digital competence across multiple channels.
In the case of collections operations, businesses can avoid upfront hardware investments and costly ongoing maintenance - with the option to scale up or down to meet current demands. This serves institutions that are looking to focus their resources on their core business. Similarly, companies seeking to arm their teams with a tech stack that enables scalability and independence from IT departments can ensure resource assignment is optimised.
Ultimately, consumers benefit from an improved service offering, allowing them to streamline the products they use and sustain patronage with the companies meeting their needs.
The potential risks that arise from partnering with third-party companies can be many and varied: data exposures, failures in regulatory compliance, the adoption of inadequate security protocols, and more. If not taken into account, these issues can yield significant legal and reputational consequences.
In some instances, risks are increased when vendors outsource elements of their own service to third-party providers. This is because security protocols, levels of transparency, and data protection policies can vary from business to business.
As financial services providers increase their third-party dependence, it becomes essential to identify critical services and ensure effective oversight of both system tolerances and security risks. And since the financial sector is inherently interconnected, with multiple entities across the value chain, businesses must consider central risks before onboarding new vendors, including the threat of data breaches, unauthorised access to internal information, and the disclosure of intellectual property.
Since updates to legal and regulatory frameworks around data access and management are common, it can be a risk to simply assume your third-party vendor is safeguarding your operational and commercial compliance. This is evidenced by the fact that 64% of data breaches are linked to third-party operators - and the average data breach costs businesses over USD 7.5 million to remedy.
To mitigate these potential risks, many companies employ cybersecurity risk management controls that include vetting third-party security practices and establishing data breach and incident report protocols. Unfortunately, these measures are often resource-intensive and costly.
An added consequence of third-party software use is the potential for outages and system failures - often from oversights at the implementation stage - leading to interrupted service for customers. As with data breaches, these gaps in usability can often be reputationally damaging and costly to resolve. Many vendors, therefore, employ a continuous deployment approach, automating the building, testing, and rollout stages of the software delivery process with each iteration.
A large number of third-party software providers choose an alternative methodology, opting for longer production cycles that allow for increased testing prior to delivery, to reduce risk once the product is live.
Though many of these risks are associated with third-party vendors, the development of proprietary software also carries with it many of the same potential pitfalls, requiring ongoing maintenance and robust security systems. To achieve this, large financial outlays are necessary, to ensure ongoing development, support, and maintenance.
As outlined, many businesses conduct rigorous testing and vetting processes to ensure new vendors meet their commercial and operational needs, from a delivery, support, and legal compliance standpoint. Still, third-party companies themselves can shore up security levels by separating sensitive data from primary system infrastructures - ideally with the use of a single-tenant cloud-based environment.
On-premise applications are, as the name suggests, applications that are stored and run at a single premise - with data only being generated, stored, and accessed locally. A primary example would be an office with multiple computers running Microsoft Word. While the application may be installed or run across multiple computers, the files and documents created on a given machine will only be accessible by users logging into the same computer.
Cloud-native applications, on the other hand, maximise accessibility and eliminate reliance on a centralised storage source. They cut out the need for investment in expensive servers and allow for fast scaling, doing away with application developments, system management, and server-to-server integration.
Crucially, cloud-based applications have the added benefit of offering simple, pain-free integrations, since they use APIs to quickly facilitate communication between multiple systems and programs. This ensures your tech stack operates as a single, coherent application, letting you connect multiple tools at the click of a button.
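The API-driven integration described above boils down to systems agreeing on a shared data contract. In this sketch, `fetch_account` stands in for a hypothetical cloud collections API endpoint (the endpoint, field names, and values are all invented for illustration); in production it would be an authenticated HTTPS call:

```python
import json

def fetch_account(account_id):
    # Simulated API response; a real cloud service would return this
    # JSON payload over HTTPS.
    payload = '{"account_id": %d, "balance_due": 120.50, "currency": "EUR"}' % account_id
    return json.loads(payload)

def build_reminder(account):
    # A second system (e.g. a messaging tool) consumes the same contract.
    return (
        f"Account {account['account_id']}: "
        f"{account['balance_due']} {account['currency']} outstanding"
    )

account = fetch_account(42)
print(build_reminder(account))  # prints "Account 42: 120.5 EUR outstanding"
```

Because both sides only depend on the JSON contract, either system can be swapped or scaled independently, which is the interoperability benefit the paragraph describes.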
Cost-effectiveness, efficiency, and interoperability are key factors for businesses adopting new technologies. Additionally, with no upfront hardware investments and maintenance costs, collections teams can scale their operations up or down at a moment's notice with speed and ease. Better still, cloud-native applications will automatically update and ensure ongoing support as new digital systems become available.
With data-driven, cloud-based applications, businesses can eliminate the stresses of maintaining legacy systems and implementing non-cloud-native software, letting their collections teams focus on essential tasks. This frees up opportunities for staff to refine customer segmentation approaches and develop more successful collection strategies in the long term.
Michael Backes has spent 20 years in the tech industry as an entrepreneur helping organisations transform their legacy frameworks into digital-first models. In 2019 Michael brought his experience building next-generation financial services to the debt management industry and co-founded and launched receeve GmbH, a cloud-native solution for the collections & recovery industry. receeve is venture capital funded and growing the team aggressively in the EU & LatAm markets. receeve transforms debt management with a comprehensive data layer and ML/AI, helping internal teams recover more by optimising processes, strategies, engagement, and asset management.
Read more:
The benefits of cloud-native software in financial services - The Paypers