<p><strong>Data Modeling and Analysis</strong></p><ul><li>Design data models and optimize performance: Structuring data relationships to ensure efficient data retrieval and calculations.</li><li>Create calculated columns and measures: Using DAX to compute derived values and aggregate metrics.</li><li>Perform exploratory data analysis (EDA): Using BI tools to explore data and identify trends and patterns.</li><li>Apply advanced data analysis techniques (e.g., statistical analysis, time series analysis, predictive modeling).</li><li>Integrate machine learning models into Power BI dashboards.</li><li>Build and maintain semantic models.</li></ul><p><strong>Dashboard Development and Visualization</strong></p><ul><li>Designing dashboards: Creating visually appealing and interactive dashboards.</li><li>Creating visualizations: Using charts, graphs, and other visual elements to represent data.</li><li>Implementing interactivity: Adding filters, slicers, and drill-down capabilities.</li></ul><p><strong>Skills and Qualifications</strong></p><ul><li>Expertise in SQL and DAX; knowledge of Python and R.</li><li>Strong proficiency in Power BI.</li><li>Data modeling and visualization skills.</li><li>Strong problem-solving skills to address technical challenges and data quality issues.</li><li>Analytical skills with the capacity to analyze complex data problems and draw meaningful insights.</li></ul>
<p>I’m building a world-class team to power our next generation of data products. We’re looking for a Senior Data Engineer who knows AWS inside and out: someone who can <strong>design secure, scalable data pipelines</strong>, <strong>own ETL/ELT workflows</strong>, <strong>engineer cloud data infrastructure</strong>, and <strong>deliver dimensional and semantic models</strong> that our analysts, data scientists, and applications can trust.</p><p>You’ll work closely with product, security, platform engineering, and analytics to move our architecture toward a <strong>real-time, governed, cost-aware</strong>, and <strong>highly automated</strong> data ecosystem.</p><p><strong>What You’ll Do</strong></p><ul><li><strong>Design & build end-to-end pipelines</strong> on AWS (batch and streaming) using services like <strong>Glue, EMR, Lambda, Step Functions, Kinesis, MSK</strong>, and <strong>Fargate</strong>.</li><li><strong>Develop robust ETL/ELT</strong> (PySpark, Spark SQL, SQL, Python) for structured, semi-structured, and unstructured data at scale.</li><li><strong>Own data storage & processing layers</strong>: <strong>S3 (Lake/Lakehouse), Redshift (or Snowflake on AWS), DynamoDB</strong>, and <strong>Athena</strong> with strong partitioning, compaction, and performance tuning.</li><li><strong>Implement data models</strong> (3NF, dimensional/star, Data Vault, Lakehouse medallion) for analytics and operational workloads.</li><li><strong>Engineer secure infrastructure-as-code</strong> with <strong>Terraform</strong> (or <strong>CDK</strong>) across multi-account setups; implement CI/CD via <strong>GitHub Actions</strong> or <strong>AWS CodeBuild/CodePipeline</strong>.</li><li><strong>Harden security & governance</strong>: use <strong>IAM</strong>, <strong>Lake Formation</strong>, <strong>KMS</strong>, <strong>Secrets Manager</strong>, <strong>VPC/PrivateLink</strong>, the <strong>Glue Data Catalog</strong>, and fine-grained access controls.
Partner with SecOps on compliance (e.g., <strong>SOC 2</strong>, <strong>FedRAMP</strong>, <strong>HIPAA</strong>, depending on the dataset).</li><li><strong>Observability & reliability</strong>: build monitoring with <strong>CloudWatch</strong>, <strong>OpenTelemetry</strong>, and data quality checks (e.g., <strong>Great Expectations</strong>, <strong>Deequ</strong>); implement SLOs and alerts.</li><li><strong>Champion best practices</strong>: code reviews, testing (unit/integration), documentation, runbooks, and blameless postmortems.</li><li><strong>Mentor</strong> mid-level engineers and collaborate on architectural decisions, standards, and technical roadmaps.</li></ul><p><br></p>
<p>We are seeking an experienced IT Monitoring & Observability Engineer to support enterprise monitoring, performance, and availability across a complex IT environment. This role is responsible for managing and optimizing a unified monitoring and event management platform, driving actionable insights, improving alert quality, and supporting 24x7 operations.</p><p>The ideal candidate has strong hands‑on experience with OpenText Operations Bridge Manager (OBM) and related monitoring tools, deep knowledge of infrastructure monitoring, and a solid understanding of ITIL/ITSM practices.</p><p><br></p><p>Key Responsibilities</p><ul><li>Support and manage a unified Configuration Management Database (CMDB), ensuring accuracy and standardization</li><li>Collect, aggregate, and analyze monitoring and performance data to support ITIL processes, including:<ul><li>Configuration</li><li>Event</li><li>Capacity</li><li>Availability</li><li>Demand</li><li>Incident and Problem Management</li></ul></li><li>Assess, tune, and optimize monitoring capabilities to deliver accurate, actionable alerts for 24x7 operations teams</li><li>Design, create, and maintain intuitive dashboards showing real‑time and historical service health and performance</li><li>Configure, maintain, and optimize monitoring dashboards across diverse infrastructure components</li><li>Deploy, manage, and update Management Packs, connectors, and monitoring policies</li><li>Perform event correlation, suppression, and filtering to reduce alert noise and improve incident triage</li><li>Integrate data from third‑party monitoring tools into a centralized event console</li><li>Conduct proactive performance and availability monitoring, identify root causes, and implement preventive measures</li><li>Support continuous improvement of monitoring strategy, tooling, and operational effectiveness</li></ul>
<p>Overview</p><p>We are seeking an experienced Storage & IPv6 Administrator to support the administration, maintenance, and optimization of enterprise storage systems within an IPv6‑enabled network environment. This role is responsible for ensuring high availability, performance, security, and reliability of storage infrastructure while supporting both IPv4 and IPv6 configurations.</p><p>The ideal candidate has strong hands‑on experience with SAN/NAS environments, backup and disaster recovery operations, and storage performance monitoring, along with a solid understanding of IPv6 networking concepts.</p><p><br></p><p>Key Responsibilities</p><p>Storage Administration & Management</p><ul><li>Configure, administer, and maintain enterprise storage systems, including:<ul><li>SAN</li><li>NAS</li><li>Direct Attached Storage (DAS)</li></ul></li><li>Ensure high availability, performance, and reliability of storage environments</li><li>Monitor storage capacity, health, and performance metrics</li><li>Identify and resolve storage bottlenecks and performance issues</li></ul><p>Backup & Disaster Recovery</p><ul><li>Implement, manage, and validate backup solutions</li><li>Perform regular backups and disaster recovery procedures</li><li>Ensure data integrity, availability, and recoverability</li></ul><p>IPv6 & Network Integration</p><ul><li>Configure storage technologies to operate in IPv6 and IPv4 environments</li><li>Support IPv6 addressing, routing, and communication for storage systems</li><li>Troubleshoot connectivity and performance issues related to IPv6-enabled storage</li></ul>
<p>Robert Half is seeking a Senior Software Engineer with AI‑enabled development experience to support the modernization of a real-time, high-availability air traffic management platform. This role offers the opportunity to work on safety-critical systems that directly support national airspace operations in a collaborative, mission-driven environment.</p><p>You will contribute to the development and sustainment of complex software systems using both traditional systems engineering practices and AI‑augmented development techniques throughout the Software Development Life Cycle (SDLC).</p><p><br></p><p>Key Responsibilities:</p><ul><li>Design, develop, test, and maintain software for real-time, high-availability systems</li><li>Apply AI-assisted development tools to accelerate coding, refactoring, debugging, and automated test generation</li><li>Utilize AI responsibly across the full SDLC, including:<ul><li>Requirements analysis</li><li>System design</li><li>Implementation</li><li>Testing</li><li>Documentation</li><li>Code review</li></ul></li><li>Analyze complex system requirements and translate them into efficient, maintainable software designs</li><li>Develop and maintain automation scripts across development, test, and production environments</li><li>Promote code quality, reuse, traceability, and cross-team collaboration</li></ul>